GRDB: A toolkit for SQLite databases, with a focus on application development

Latest release: September 8, 2022 • version 5.26.1 • CHANGELOG • Migrating From GRDB 4 to GRDB 5

Requirements: iOS 11.0+ / macOS 10.10+ / tvOS 9.0+ / watchOS 2.0+ • SQLite 3.8.5+ • Swift 5.3+ / Xcode 12+

| Swift version | GRDB version |
| :------------ | :----------- |
| Swift 5.3+    | v5.26.1      |
| Swift 5.2     | v5.12.0      |
| Swift 5.1     | v4.14.0      |
| Swift 5       | v4.14.0      |
| Swift 4.2     | v4.14.0      |
| Swift 4.1     | v3.7.0       |
| Swift 4       | v2.10.0      |
| Swift 3.2     | v1.3.0       |
| Swift 3.1     | v1.3.0       |
| Swift 3       | v1.0         |
| Swift 2.3     | v0.81.2      |
| Swift 2.2     | v0.80.2      |

Contact:

What is this?

GRDB provides raw access to SQL and advanced SQLite features, because one sometimes enjoys a sharp tool. It has robust concurrency primitives, so that multi-threaded applications can efficiently use their databases. It gives your application models persistence and fetching methods, so that you don’t have to deal with SQL and raw database rows when you don’t want to.

Compared to SQLite.swift or FMDB, GRDB can spare you a lot of glue code. Compared to Core Data or Realm, it can simplify your multi-threaded applications.

It comes with up-to-date documentation, general guides, and it is fast.

See Why Adopt GRDB? if you are looking for your favorite database library.


Features • Usage • Installation • Documentation • FAQ


Features

Programming tools for both database beginners and SQLite experts:

In-depth integration with our programming environment:

  • Swift Concurrency: try await your database (Xcode 13.3.1+).
  • SwiftUI: Access and observe the database from your SwiftUI views.
  • Combine: Access and observe the database with Combine publishers.
  • RxSwift: Access and observe the database with RxSwift observables.

Usage

Start using the database in four steps:

import GRDB

// 1. Open a database connection
let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite")

// 2. Define the database schema
try dbQueue.write { db in
    try db.create(table: "player") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("name", .text).notNull()
        t.column("score", .integer).notNull()
    }
}

// 3. Define a record type
struct Player: Codable, FetchableRecord, PersistableRecord {
    var id: Int64
    var name: String
    var score: Int
}

// 4. Access the database
try dbQueue.write { db in
    try Player(id: 1, name: "Arthur", score: 100).insert(db)
    try Player(id: 2, name: "Barbara", score: 1000).insert(db)
}

let players: [Player] = try dbQueue.read { db in
    try Player.fetchAll(db)
}
Activate the WAL mode:

import GRDB

// Simple database connection
let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite")

// Enhanced multithreading based on SQLite's WAL mode
let dbPool = try DatabasePool(path: "/path/to/database.sqlite")

See Database Connections.
Access to raw SQL:

try dbQueue.write { db in
    try db.execute(sql: """
        CREATE TABLE place (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          title TEXT NOT NULL,
          favorite BOOLEAN NOT NULL DEFAULT 0,
          latitude DOUBLE NOT NULL,
          longitude DOUBLE NOT NULL)
        """)

    try db.execute(sql: """
        INSERT INTO place (title, favorite, latitude, longitude)
        VALUES (?, ?, ?, ?)
        """, arguments: ["Paris", true, 48.85341, 2.3488])

    let parisId = db.lastInsertedRowID

    // Avoid SQL injection with SQL interpolation
    try db.execute(literal: """
        INSERT INTO place (title, favorite, latitude, longitude)
        VALUES (\("King's Cross"), \(true), \(51.52151), \(-0.12763))
        """)
}

See Executing Updates.
Access to raw database rows and values:

try dbQueue.read { db in
    // Fetch database rows
    let rows = try Row.fetchCursor(db, sql: "SELECT * FROM place")
    while let row = try rows.next() {
        let title: String = row["title"]
        let isFavorite: Bool = row["favorite"]
        let coordinate = CLLocationCoordinate2D(
            latitude: row["latitude"],
            longitude: row["longitude"])
    }

    // Fetch values
    let placeCount = try Int.fetchOne(db, sql: "SELECT COUNT(*) FROM place")! // Int
    let placeTitles = try String.fetchAll(db, sql: "SELECT title FROM place") // [String]
}

let placeCount = try dbQueue.read { db in
    try Int.fetchOne(db, sql: "SELECT COUNT(*) FROM place")!
}

See Fetch Queries.
Database model types aka "records":

struct Place {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    var coordinate: CLLocationCoordinate2D
}

// snip: turn Place into a "record" by adopting the protocols that
// provide fetching and persistence methods.

try dbQueue.write { db in
    // Create database table
    try db.create(table: "place") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("title", .text).notNull()
        t.column("favorite", .boolean).notNull().defaults(to: false)
        t.column("longitude", .double).notNull()
        t.column("latitude", .double).notNull()
    }

    var berlin = Place(
        id: nil,
        title: "Berlin",
        isFavorite: false,
        coordinate: CLLocationCoordinate2D(latitude: 52.52437, longitude: 13.41053))

    try berlin.insert(db)
    berlin.id // some value

    berlin.isFavorite = true
    try berlin.update(db)
}

See Records.
Query the database with the Swift query interface:

try dbQueue.read { db in
    // Place?
    let paris = try Place.fetchOne(db, id: 1)

    // Place?
    let berlin = try Place.filter(Column("title") == "Berlin").fetchOne(db)

    // [Place]
    let favoritePlaces = try Place
        .filter(Column("favorite") == true)
        .order(Column("title"))
        .fetchAll(db)

    // Int
    let favoriteCount = try Place.filter(Column("favorite")).fetchCount(db)

    // SQL is always welcome
    let places = try Place.fetchAll(db, sql: "SELECT * FROM place")
}

See the Query Interface.
Database changes notifications:

// Define the observed value
let observation = ValueObservation.tracking { db in
    try Place.fetchAll(db)
}

// Start observation
let cancellable = observation.start(
    in: dbQueue,
    onError: { error in ... },
    onChange: { (places: [Place]) in print("Fresh places: \(places)") })

Ready-made support for Combine and RxSwift:

// Combine
let cancellable = observation.publisher(in: dbQueue).sink(
    receiveCompletion: { completion in ... },
    receiveValue: { (places: [Place]) in print("Fresh places: \(places)") })

// RxSwift
let disposable = observation.rx.observe(in: dbQueue).subscribe(
    onNext: { (places: [Place]) in print("Fresh places: \(places)") },
    onError: { error in ... })

See Database Observation, Combine Support, RxGRDB.

Documentation

GRDB runs on top of SQLite: you should get familiar with the SQLite FAQ. For general and detailed information, jump to the SQLite Documentation.

Demo Applications & Frequently Asked Questions

Reference

Getting Started

SQLite and SQL

Records and the Query Interface

Application Tools

Good to Know

General Guides & Good Practices

FAQ

Sample Code

Installation

The installation procedures below have GRDB use the version of SQLite that ships with the target operating system.

See Encryption for the installation procedure of GRDB with SQLCipher.

See Custom SQLite builds for the installation procedure of GRDB with a customized build of SQLite.

See Enabling FTS5 Support for the installation procedure of GRDB with support for the FTS5 full-text engine.

CocoaPods

CocoaPods is a dependency manager for Xcode projects. To use GRDB with CocoaPods (version 1.2 or higher), specify in your Podfile:

pod 'GRDB.swift'

GRDB can be installed as a framework, or a static library.

Swift Package Manager

The Swift Package Manager automates the distribution of Swift code. To use GRDB with SPM, add a dependency to https://github.com/groue/GRDB.swift.git

GRDB offers two libraries, GRDB and GRDB-dynamic. Pick only one. When in doubt, prefer GRDB. The GRDB-dynamic library can prove useful if you are going to link it with multiple targets within your app and only wish to link to a shared, dynamic framework once. See How to link a Swift Package as dynamic for more information.
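
For example, a Package.swift manifest might declare the dependency like this (a sketch: the target name is hypothetical, and the version requirement is an assumption based on the latest release mentioned above):

// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        // Pin to the GRDB release that suits your needs
        .package(url: "https://github.com/groue/GRDB.swift.git", from: "5.26.1"),
    ],
    targets: [
        .target(name: "MyApp", dependencies: [
            // Prefer the GRDB product; GRDB-dynamic is the alternative described above
            .product(name: "GRDB", package: "GRDB.swift"),
        ]),
    ]
)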

:point_up: Note: Linux is not currently supported.

:warning: Warning: Due to an Xcode bug, you will get “No such module ‘CSQLite’” errors when you want to embed the GRDB package in other targets than the main application (watch extensions, for example). UI and Unit testing targets are OK, though. See #642 for more information.

Carthage

Carthage is unsupported. For some context about this decision, see #433.

Manually

  1. Download a copy of GRDB, or clone its repository and make sure you checkout the latest tagged version.

  2. Embed the GRDB.xcodeproj project in your own project.

  3. Add the GRDBOSX, GRDBiOS, GRDBtvOS, or GRDBWatchOS target in the Target Dependencies section of the Build Phases tab of your application target (extension target for watchOS).

  4. Add the GRDB.framework from the targeted platform to the Embedded Binaries section of the General tab of your application target (extension target for watchOS).

:bulb: Tip: see the Demo Applications for examples of such integration.

Database Connections

GRDB provides two classes for accessing SQLite databases: DatabaseQueue and DatabasePool:

import GRDB

// Pick one:
let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite")
let dbPool = try DatabasePool(path: "/path/to/database.sqlite")

The differences are:

  • Database pools allow concurrent database accesses (this can improve the performance of multithreaded applications).
  • Database pools open your SQLite database in the WAL mode (unless read-only).
  • Database queues support in-memory databases.

If you are not sure, choose DatabaseQueue. You will always be able to switch to DatabasePool later.

Database Queues

Open a database queue with the path to a database file:

import GRDB

let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite")
let inMemoryDBQueue = DatabaseQueue()

SQLite creates the database file if it does not already exist. The connection is closed when the database queue gets deinitialized.

A database queue can be used from any thread. The write and read methods are synchronous, and block the current thread until your database statements are executed in a protected dispatch queue:

// Modify the database:
try dbQueue.write { db in
    try db.create(table: "place") { ... }
    try Place(...).insert(db)
}

// Read values:
try dbQueue.read { db in
    let places = try Place.fetchAll(db)
    let placeCount = try Place.fetchCount(db)
}

Database access methods can return values:

let placeCount = try dbQueue.read { db in
    try Place.fetchCount(db)
}

let newPlaceCount = try dbQueue.write { db -> Int in
    try Place(...).insert(db)
    return try Place.fetchCount(db)
}

See the Concurrency guide for asynchronous database accesses.
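
For example, the asynchronous variants of read and write suspend instead of blocking the current thread (a minimal sketch, assuming an async context and the Place record used above):

// Swift concurrency: does not block the calling thread
let placeCount = try await dbQueue.read { db in
    try Place.fetchCount(db)
}

try await dbQueue.write { db in
    try Place(...).insert(db)
}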

A database queue serializes accesses to the database, which means that there is never more than one thread that uses the database.

  • When you don’t need to modify the database, prefer the read method. It prevents any modification to the database.

  • The write method wraps your database statements in a transaction that commits if and only if no error occurs. On the first unhandled error, all changes are reverted, the whole transaction is rolled back, and the error is rethrown.

    When precise transaction handling is required, see Transactions and Savepoints.

A database queue needs your application to follow rules in order to deliver its safety guarantees. Please refer to the Concurrency guide.

See Database Configuration for DatabaseQueue options.

:bulb: Tip: see the Demo Applications for sample code.

Database Pools

A database pool allows concurrent database accesses.

import GRDB
let dbPool = try DatabasePool(path: "/path/to/database.sqlite")

SQLite creates the database file if it does not already exist. The connection is closed when the database pool gets deinitialized.

:point_up: Note: unless read-only, a database pool opens your database in the SQLite “WAL mode”. The WAL mode does not fit all situations. Please have a look at https://www.sqlite.org/wal.html.

A database pool can be used from any thread. The write and read methods are synchronous, and block the current thread until your database statements are executed in a protected dispatch queue:

// Modify the database:
try dbPool.write { db in
    try db.create(table: "place") { ... }
    try Place(...).insert(db)
}

// Read values:
try dbPool.read { db in
    let places = try Place.fetchAll(db)
    let placeCount = try Place.fetchCount(db)
}

Database access methods can return values:

let placeCount = try dbPool.read { db in
    try Place.fetchCount(db)
}

let newPlaceCount = try dbPool.write { db -> Int in
    try Place(...).insert(db)
    return try Place.fetchCount(db)
}

See the Concurrency guide for asynchronous database accesses.

Database pools allow several threads to access the database at the same time:

  • When you don’t need to modify the database, prefer the read method, because several threads can perform reads in parallel.

    Reads are generally non-blocking, unless the maximum number of concurrent reads has been reached. In this case, a read has to wait for another read to complete. That maximum number can be configured.

  • Reads are guaranteed an immutable view of the last committed state of the database, regardless of concurrent writes. This kind of isolation is called snapshot isolation.

  • Unlike reads, writes are serialized. There is never more than a single thread that is writing into the database.

  • The write method wraps your database statements in a transaction that commits if and only if no error occurs. On the first unhandled error, all changes are reverted, the whole transaction is rolled back, and the error is rethrown.

    When precise transaction handling is required, see Transactions and Savepoints.

  • Database pools can take snapshots of the database (see the sketch after this list).

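Taking and reading a snapshot might look like this (a sketch; see the Concurrency guide for details):

// Latest committed database state, frozen at the time the snapshot is created
let snapshot = try dbPool.makeSnapshot()

let placeCount = try snapshot.read { db in
    try Place.fetchCount(db)
}
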
A database pool needs your application to follow rules in order to deliver its safety guarantees. See the Concurrency guide for more details about database pools, how they differ from database queues, and advanced use cases.

See Database Configuration for DatabasePool options.

:bulb: Tip: see the Demo Applications for sample code.

Database Configuration

var config = Configuration()
config.readonly = true
config.foreignKeysEnabled = true // Default is already true
config.label = "MyDatabase"      // Useful when your app opens multiple databases
config.maximumReaderCount = 10   // (DatabasePool only) The default is 5

let dbQueue = try DatabaseQueue( // or DatabasePool
    path: "/path/to/database.sqlite",
    configuration: config)

In debug builds, you can increase the verbosity of error descriptions and trace events if you opt in for public statement arguments:

#if DEBUG
// Protect sensitive information by enabling verbose debugging in DEBUG builds only
config.publicStatementArguments = true
#endif

let dbQueue = try DatabaseQueue(path: ..., configuration: config)

do {
    try dbQueue.write { db in
        user.name = ...
        user.location = ...
        user.address = ...
        user.phoneNumber = ...
        try user.save(db)
    }
} catch {
    // Prints sensitive information in debug builds only
    print(error)
}

:warning: Warning: It is your responsibility to prevent sensitive information from leaking in unexpected locations, so you should not set the publicStatementArguments flag in release builds (think about GDPR and other privacy-related rules).

:warning: Warning: The SQLite version that ships with old operating systems (prior to OSX 10.12, tvOS 10.0, and watchOS 3.0) outputs statement arguments in the trace events, regardless of the publicStatementArguments flag.

See Configuration for more details and configuration options.

SQLite API

In this section of the documentation, we will talk SQL. Jump to the query interface if SQL is not your cup of tea.

Advanced topics:

Executing Updates

Once you have a database connection, the execute method runs SQL statements that do not return any database row, such as CREATE TABLE, INSERT, DELETE, ALTER, etc.

For example:

try dbQueue.write { db in
    try db.execute(sql: """
        CREATE TABLE player (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL,
            score INT)
        """)

    try db.execute(
        sql: "INSERT INTO player (name, score) VALUES (?, ?)",
        arguments: ["Barbara", 1000])

    try db.execute(
        sql: "UPDATE player SET score = :score WHERE id = :id",
        arguments: ["score": 1000, "id": 1])
}

The ? and colon-prefixed keys like :score in the SQL query are the statement arguments. You pass arguments with arrays or dictionaries, as in the example above. See Values for more information on supported argument types (Bool, Int, String, Date, Swift enums, etc.), and StatementArguments for a detailed documentation of SQLite arguments.

You can also embed query arguments right into your SQL queries, with the literal argument label, as in the example below. See SQL Interpolation for more details.

try dbQueue.write { db in
    try db.execute(literal: """
        INSERT INTO player (name, score) VALUES (\("O'Brien"), \(550))
        """)
}

Never ever embed values directly in your raw SQL strings. See Avoiding SQL Injection for more information:

// WRONG: don't embed values in raw SQL strings
let id = 123
let name = textField.text
try db.execute(
    sql: "UPDATE player SET name = '\(name)' WHERE id = \(id)")

// CORRECT: use arguments dictionary
try db.execute(
    sql: "UPDATE player SET name = :name WHERE id = :id",
    arguments: ["name": name, "id": id])

// CORRECT: use arguments array
try db.execute(
    sql: "UPDATE player SET name = ? WHERE id = ?",
    arguments: [name, id])

// CORRECT: use SQL Interpolation
try db.execute(
    literal: "UPDATE player SET name = \(name) WHERE id = \(id)")

Join multiple statements with a semicolon:

try db.execute(sql: """
    INSERT INTO player (name, score) VALUES (?, ?);
    INSERT INTO player (name, score) VALUES (?, ?);
    """, arguments: ["Arthur", 750, "Barbara", 1000])

try db.execute(literal: """
    INSERT INTO player (name, score) VALUES (\("Arthur"), \(750));
    INSERT INTO player (name, score) VALUES (\("Barbara"), \(1000));
    """)

When you want to make sure that a single statement is executed, use Prepared Statements.

After an INSERT statement, you can get the row ID of the inserted row:

try db.execute(
    sql: "INSERT INTO player (name, score) VALUES (?, ?)",
    arguments: ["Arthur", 1000])
let playerId = db.lastInsertedRowID

Don’t miss Records, which provide classic persistence methods:

var player = Player(name: "Arthur", score: 1000)
try player.insert(db)
let playerId = player.id

Fetch Queries

Database connections let you fetch database rows, plain values, and custom models aka “records”.

Rows are the raw results of SQL queries:

try dbQueue.read { db in
    if let row = try Row.fetchOne(db, sql: "SELECT * FROM wine WHERE id = ?", arguments: [1]) {
        let name: String = row["name"]
        let color: Color = row["color"]
        print(name, color)
    }
}

Values are the Bool, Int, String, Date, Swift enums, etc. stored in row columns:

try dbQueue.read { db in
    let urls = try URL.fetchCursor(db, sql: "SELECT url FROM wine")
    while let url = try urls.next() {
        print(url)
    }
}

Records are your application objects that can initialize themselves from rows:

let wines = try dbQueue.read { db in
    try Wine.fetchAll(db, sql: "SELECT * FROM wine")
}

Fetching Methods

Throughout GRDB, you can always fetch cursors, arrays, sets, or single values of any fetchable type (database row, simple value, or custom record):

try Row.fetchCursor(...) // A Cursor of Row
try Row.fetchAll(...)    // [Row]
try Row.fetchSet(...)    // Set<Row>
try Row.fetchOne(...)    // Row?
  • fetchCursor returns a cursor over fetched values:

    let rows = try Row.fetchCursor(db, sql: "SELECT ...") // A Cursor of Row
    
  • fetchAll returns an array:

    let players = try Player.fetchAll(db, sql: "SELECT ...") // [Player]
    
  • fetchSet returns a set:

    let names = try String.fetchSet(db, sql: "SELECT ...") // Set<String>
    
  • fetchOne returns a single optional value, and consumes a single database row (if any).

    let count = try Int.fetchOne(db, sql: "SELECT COUNT(*) ...") // Int?
    

All those fetching methods require an SQL string that contains a single SQL statement. When you want to fetch from multiple statements joined with a semicolon, iterate the multiple prepared statements found in the SQL string:

let statements = try db.allStatements(sql: """
    SELECT ...; 
    SELECT ...; 
    SELECT ...;
    """)
while let statement = try statements.next() {
    let players = try Player.fetchAll(statement)
}

You can join the results of all statements yielded by the allStatements method, like the SQLite sqlite3_exec function:

// A single cursor of all rows from all statements
let rows = try db
    .allStatements(sql: "...")
    .flatMap { statement in try Row.fetchCursor(statement) }

See prepared statements for more information about allStatements().

Cursors

Whenever you consume several rows from the database, you can fetch an Array, a Set, or a Cursor.

The fetchAll() and fetchSet() methods return regular Swift arrays and sets, which you iterate like all other arrays and sets:

try dbQueue.read { db in
    // [Player]
    let players = try Player.fetchAll(db, sql: "SELECT ...")
    for player in players {
        // use player
    }
}

Unlike arrays and sets, cursors returned by fetchCursor() load their results step after step:

try dbQueue.read { db in
    // Cursor of Player
    let players = try Player.fetchCursor(db, sql: "SELECT ...")
    while let player = try players.next() {
        // use player
    }
}
  • Cursors cannot be used from any thread: you must consume a cursor on the dispatch queue it was created in. In particular, don’t extract a cursor out of a database access method:

    // Wrong
    let cursor = try dbQueue.read { db in
        try Player.fetchCursor(db, ...)
    }
    while let player = try cursor.next() { ... }
    

    Conversely, arrays and sets may be consumed on any thread:

    // OK
    let array = try dbQueue.read { db in
        try Player.fetchAll(db, ...)
    }
    for player in array { ... }
    
  • Cursors can be iterated only one time. Arrays and sets can be iterated many times.

  • Cursors iterate database results in a lazy fashion, and don’t consume much memory. Arrays and sets contain copies of database values, and may take a lot of memory when there are many fetched results.

  • Cursors have direct access to SQLite, unlike arrays and sets, which have to take the time to copy database values. If you are after extra performance, you may prefer cursors.

  • Cursors can feed Swift collections.

    Most of the time, you will use fetchAll or fetchSet when you want an array or a set. For more specific needs, you may prefer one of the initializers below. All of them accept an extra optional minimumCapacity argument, which helps optimize your app when you have an idea of the number of elements in a cursor (the built-in fetchAll and fetchSet do not perform such an optimization).

    Arrays and all types conforming to RangeReplaceableCollection:

    // [String]
    let cursor = try String.fetchCursor(db, ...)
    let array = try Array(cursor)
    

    Sets:

    // Set<Int>
    let cursor = try Int.fetchCursor(db, ...)
    let set = try Set(cursor)
    

    Dictionaries:

    // [Int64: [Player]]
    let cursor = try Player.fetchCursor(db)
    let dictionary = try Dictionary(grouping: cursor, by: { $0.teamID })
    
    // [Int64: Player]
    let cursor = try Player.fetchCursor(db).map { ($0.id, $0) }
    let dictionary = try Dictionary(uniqueKeysWithValues: cursor)
    
  • Cursors adopt the Cursor protocol, which looks a lot like standard lazy sequences of Swift. As such, cursors come with many convenience methods: compactMap, contains, dropFirst, dropLast, drop(while:), enumerated, filter, first, flatMap, forEach, joined, joined(separator:), max, max(by:), min, min(by:), map, prefix, prefix(while:), reduce, reduce(into:), suffix:

    // Prints all Github links
    try URL
        .fetchCursor(db, sql: "SELECT url FROM link")
        .filter { url in url.host == "github.com" }
        .forEach { url in print(url) }
    
    // An efficient cursor of coordinates:
    let locations = try Row
        .fetchCursor(db, sql: "SELECT latitude, longitude FROM place")
        .map { row in
            CLLocationCoordinate2D(latitude: row[0], longitude: row[1])
        }
    
  • Cursors are not Swift sequences. That’s because Swift sequences can’t handle iteration errors, whereas reading SQLite results may fail at any time.

  • Cursors require a little care:

    • Don’t modify the results during a cursor iteration:

      // Undefined behavior
      while let player = try players.next() {
          try db.execute(sql: "DELETE ...")
      }
      
    • Don’t turn a cursor of Row into an array or a set. You would not get the distinct rows you expect. To get an array of rows, use Row.fetchAll(...). To get a set of rows, use Row.fetchSet(...). Generally speaking, make sure you copy a row whenever you extract it from a cursor for later use: row.copy() (see the sketch after this list).

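For example, copying a row out of a cursor might look like this (a sketch, reusing the place table from earlier examples):

let rows = try Row.fetchCursor(db, sql: "SELECT * FROM place")
var lastRow: Row? = nil
while let row = try rows.next() {
    // Copy the row before storing it for later use
    lastRow = row.copy()
}
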
If you don’t see, or don’t care about the difference, use arrays. If you care about memory and performance, use cursors when appropriate.

Row Queries

Fetching Rows

Fetch cursors of rows, arrays, sets, or single rows (see fetching methods):

try dbQueue.read { db in
    try Row.fetchCursor(db, sql: "SELECT ...", arguments: ...) // A Cursor of Row
    try Row.fetchAll(db, sql: "SELECT ...", arguments: ...)    // [Row]
    try Row.fetchSet(db, sql: "SELECT ...", arguments: ...)    // Set<Row>
    try Row.fetchOne(db, sql: "SELECT ...", arguments: ...)    // Row?

    let rows = try Row.fetchCursor(db, sql: "SELECT * FROM wine")
    while let row = try rows.next() {
        let name: String = row["name"]
        let color: Color = row["color"]
        print(name, color)
    }
}

let rows = try dbQueue.read { db in
    try Row.fetchAll(db, sql: "SELECT * FROM player")
}

Arguments are optional arrays or dictionaries that fill the positional ? and colon-prefixed keys like :name in the query:

let rows = try Row.fetchAll(db,
    sql: "SELECT * FROM player WHERE name = ?",
    arguments: ["Arthur"])

let rows = try Row.fetchAll(db,
    sql: "SELECT * FROM player WHERE name = :name",
    arguments: ["name": "Arthur"])

See Values for more information on supported arguments types (Bool, Int, String, Date, Swift enums, etc.), and StatementArguments for a detailed documentation of SQLite arguments.

Unlike row arrays that contain copies of the database rows, row cursors are close to the SQLite metal, and require a little care:

:point_up: Don’t turn a cursor of Row into an array or a set. You would not get the distinct rows you expect. To get an array of rows, use Row.fetchAll(...). To get a set of rows, use Row.fetchSet(...). Generally speaking, make sure you copy a row whenever you extract it from a cursor for later use: row.copy().

Column Values

Read column values by index or column name:

let name: String = row[0]      // 0 is the leftmost column
let name: String = row["name"] // Leftmost matching column - lookup is case-insensitive
let name: String = row[Column("name")] // Using query interface's Column

Make sure to ask for an optional when the value may be NULL:

let name: String? = row["name"]

The row[] subscript returns the type you ask for. See Values for more information on supported value types:

let bookCount: Int     = row["bookCount"]
let bookCount64: Int64 = row["bookCount"]
let hasBooks: Bool     = row["bookCount"] // false when 0

let string: String     = row["date"]      // "2015-09-11 18:14:15.123"
let date: Date         = row["date"]      // Date
self.date = row["date"] // Depends on the type of the property.

You can also use the as type casting operator:

row[...] as Int
row[...] as Int?

:warning: Warning: avoid the as! and as? operators:

if let int = row[...] as? Int { ... } // BAD - doesn't work
if let int = row[...] as Int? { ... } // GOOD

Generally speaking, you can extract the type you need, provided it can be converted from the underlying SQLite value:

  • Successful conversions include:

    • All numeric SQLite values to all numeric Swift types, and Bool (zero is the only false boolean).
    • Text SQLite values to Swift String.
    • Blob SQLite values to Foundation Data.

    See Values for more information on supported types (Bool, Int, String, Date, Swift enums, etc.)

  • NULL returns nil.

    let row = try Row.fetchOne(db, sql: "SELECT NULL")!
    row[0] as Int? // nil
    row[0] as Int  // fatal error: could not convert NULL to Int.
    

    There is one exception, though: the DatabaseValue type:

    row[0] as DatabaseValue // DatabaseValue.null
    
  • Missing columns return nil.

    let row = try Row.fetchOne(db, sql: "SELECT 'foo' AS foo")!
    row["missing"] as String? // nil
    row["missing"] as String  // fatal error: no such column: missing
    

    You can explicitly check for a column presence with the hasColumn method.

  • Invalid conversions throw a fatal error.

    let row = try Row.fetchOne(db, sql: "SELECT 'Mom’s birthday'")!
    row[0] as String // "Mom’s birthday"
    row[0] as Date?  // fatal error: could not convert "Mom’s birthday" to Date.
    row[0] as Date   // fatal error: could not convert "Mom’s birthday" to Date.
    
    let row = try Row.fetchOne(db, sql: "SELECT 256")!
    row[0] as Int    // 256
    row[0] as UInt8? // fatal error: could not convert 256 to UInt8.
    row[0] as UInt8  // fatal error: could not convert 256 to UInt8.
    

    Those conversion fatal errors can be avoided with the DatabaseValue type:

    let row = try Row.fetchOne(db, sql: "SELECT 'Mom’s birthday'")!
    let dbValue: DatabaseValue = row[0]
    if dbValue.isNull {
        // Handle NULL
    } else if let date = Date.fromDatabaseValue(dbValue) {
        // Handle valid date
    } else {
        // Handle invalid date
    }
    

    This extra verbosity is the consequence of having to deal with an untrusted database: you may consider fixing the content of your database instead. See Fatal Errors for more information.

  • SQLite has a weak type system, and provides convenience conversions that can turn String to Int, Double to Blob, etc.

    GRDB will sometimes let those conversions go through:

    let rows = try Row.fetchCursor(db, sql: "SELECT '20 small cigars'")
    while let row = try rows.next() {
        row[0] as Int   // 20
    }
    

    Don’t freak out: those conversions did not prevent SQLite from becoming the immensely successful database engine you want to use. And GRDB adds safety checks described just above. You can also prevent those convenience conversions altogether by using the DatabaseValue type.

DatabaseValue

DatabaseValue is an intermediate type between SQLite and your values, which gives information about the raw value stored in the database.

You get DatabaseValue just like other value types:

let dbValue: DatabaseValue = row[0]
let dbValue: DatabaseValue? = row["name"] // nil if and only if column does not exist

// Check for NULL:
dbValue.isNull // Bool

// The stored value:
dbValue.storage.value // Int64, Double, String, Data, or nil

// All five storage classes supported by SQLite:
switch dbValue.storage {
case .null:                 print("NULL")
case .int64(let int64):     print("Int64: \(int64)")
case .double(let double):   print("Double: \(double)")
case .string(let string):   print("String: \(string)")
case .blob(let data):       print("Data: \(data)")
}

You can extract regular values (Bool, Int, String, Date, Swift enums, etc.) from DatabaseValue with the DatabaseValueConvertible.fromDatabaseValue() method:

let dbValue: DatabaseValue = row["bookCount"]
let bookCount   = Int.fromDatabaseValue(dbValue)   // Int?
let bookCount64 = Int64.fromDatabaseValue(dbValue) // Int64?
let hasBooks    = Bool.fromDatabaseValue(dbValue)  // Bool?, false when 0

let dbValue: DatabaseValue = row["date"]
let string = String.fromDatabaseValue(dbValue)     // "2015-09-11 18:14:15.123"
let date   = Date.fromDatabaseValue(dbValue)       // Date?

fromDatabaseValue returns nil for invalid conversions:

let row = try Row.fetchOne(db, sql: "SELECT 'Mom’s birthday'")!
let dbValue: DatabaseValue = row[0]
let string = String.fromDatabaseValue(dbValue) // "Mom’s birthday"
let int    = Int.fromDatabaseValue(dbValue)    // nil
let date   = Date.fromDatabaseValue(dbValue)   // nil

Rows as Dictionaries

Row adopts the standard RandomAccessCollection protocol, and can be seen as a dictionary of DatabaseValue:

// All the (columnName, dbValue) tuples, from left to right:
for (columnName, dbValue) in row {
    ...
}

You can build rows from dictionaries (standard Swift dictionaries and NSDictionary). See Values for more information on supported types:

let row: Row = ["name": "foo", "date": nil]
let row = Row(["name": "foo", "date": nil])
let row = Row(/* [AnyHashable: Any] */) // nil if invalid dictionary

Yet rows are not real dictionaries: they may contain duplicate columns:

let row = try Row.fetchOne(db, sql: "SELECT 1 AS foo, 2 AS foo")!
row.columnNames    // ["foo", "foo"]
row.databaseValues // [1, 2]
row["foo"]         // 1 (leftmost matching column)
for (columnName, dbValue) in row { ... } // ("foo", 1), ("foo", 2)

When you build a dictionary from a row, you have to disambiguate identical columns, and choose how to present database values. For example:

  • A [String: DatabaseValue] dictionary that keeps leftmost value in case of duplicated column name:

    let dict = Dictionary(row, uniquingKeysWith: { (left, _) in left })
    
  • A [String: AnyObject] dictionary which keeps rightmost value in case of duplicated column name. This dictionary is identical to FMResultSet’s resultDictionary from FMDB. It contains NSNull values for null columns, and can be shared with Objective-C:

    let dict = Dictionary(
        row.map { (column, dbValue) in
            (column, dbValue.storage.value as AnyObject)
        },
        uniquingKeysWith: { (_, right) in right })
    
  • A [String: Any] dictionary that can feed, for example, JSONSerialization:

    let dict = Dictionary(
        row.map { (column, dbValue) in
            (column, dbValue.storage.value)
        },
        uniquingKeysWith: { (left, _) in left })
    

See the documentation of Dictionary.init(_:uniquingKeysWith:) for more information.

Value Queries

Instead of rows, you can directly fetch values. Like rows, fetch them as cursors, arrays, sets, or single values (see fetching methods). Values are extracted from the leftmost column of the SQL queries:

try dbQueue.read { db in
    try Int.fetchCursor(db, sql: "SELECT ...", arguments: ...) // A Cursor of Int
    try Int.fetchAll(db, sql: "SELECT ...", arguments: ...)    // [Int]
    try Int.fetchSet(db, sql: "SELECT ...", arguments: ...)    // Set<Int>
    try Int.fetchOne(db, sql: "SELECT ...", arguments: ...)    // Int?

    // When database may contain NULL:
    try Optional<Int>.fetchCursor(db, sql: "SELECT ...", arguments: ...) // A Cursor of Int?
    try Optional<Int>.fetchAll(db, sql: "SELECT ...", arguments: ...)    // [Int?]
    try Optional<Int>.fetchSet(db, sql: "SELECT ...", arguments: ...)    // Set<Int?>
}

let playerCount = try dbQueue.read { db in
    try Int.fetchOne(db, sql: "SELECT COUNT(*) FROM player")!
}

fetchOne returns an optional value which is nil in two cases: either the SELECT statement yielded no row, or one row with a NULL value.
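
For example, both of the following fetches return nil:

// No row:
let a = try Int.fetchOne(db, sql: "SELECT 42 WHERE 0") // nil
// One row with a NULL value:
let b = try Int.fetchOne(db, sql: "SELECT NULL")       // nil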

There are many supported value types (Bool, Int, String, Date, Swift enums, etc.). See Values for more information:

let count = try Int.fetchOne(db, sql: "SELECT COUNT(*) FROM player")! // Int
let urls = try URL.fetchAll(db, sql: "SELECT url FROM link")          // [URL]

Values

GRDB ships with built-in support for the following value types:

Values can be used as statement arguments:

let url: URL = ...
let verified: Bool = ...
try db.execute(
    sql: "INSERT INTO link (url, verified) VALUES (?, ?)",
    arguments: [url, verified])

Values can be extracted from rows:

let rows = try Row.fetchCursor(db, sql: "SELECT * FROM link")
while let row = try rows.next() {
    let url: URL = row["url"]
    let verified: Bool = row["verified"]
}

Values can be directly fetched:

let urls = try URL.fetchAll(db, sql: "SELECT url FROM link")  // [URL]

Use values in Records:

struct Link: FetchableRecord {
    var url: URL
    var isVerified: Bool

    init(row: Row) {
        url = row["url"]
        isVerified = row["verified"]
    }
}

Use values in the query interface:

let url: URL = ...
let link = try Link.filter(Column("url") == url).fetchOne(db)

Data (and Memory Savings)

Data suits the BLOB SQLite columns. It can be stored and fetched from the database just like other values:

let rows = try Row.fetchCursor(db, sql: "SELECT data, ...")
while let row = try rows.next() {
    let data: Data = row["data"]
}

At each step of the request iteration, the row[] subscript creates two copies of the database bytes: one fetched by SQLite, and another, stored in the Swift Data value.

You have the opportunity to save memory by not copying the data fetched by SQLite:

while let row = try rows.next() {
    let data = row.dataNoCopy(named: "data") // Data?
}

The non-copied data does not live longer than the iteration step: make sure that you do not use it past this point.
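
For example, if the bytes must outlive the iteration step, stick to the copying subscript (a sketch):

while let row = try rows.next() {
    // Copied data: safe to store for later use
    let data: Data = row["data"]

    // Non-copied data: only valid during this iteration step
    let transientData = row.dataNoCopy(named: "data")
}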

Date and DateComponents

Date and DateComponents can be stored and fetched from the database.

Here is how GRDB supports the various date formats supported by SQLite:

| SQLite format               | Date               | DateComponents |
| :-------------------------- | :----------------- | :------------- |
| YYYY-MM-DD                  | Read ¹             | Read / Write   |
| YYYY-MM-DD HH:MM            | Read ¹ ²           | Read ² / Write |
| YYYY-MM-DD HH:MM:SS         | Read ¹ ²           | Read ² / Write |
| YYYY-MM-DD HH:MM:SS.SSS     | Read ¹ ² / Write ¹ | Read ² / Write |
| YYYY-MM-DD*T*HH:MM          | Read ¹ ²           | Read ²         |
| YYYY-MM-DD*T*HH:MM:SS       | Read ¹ ²           | Read ²         |
| YYYY-MM-DD*T*HH:MM:SS.SSS   | Read ¹ ²           | Read ²         |
| HH:MM                       |                    | Read ² / Write |
| HH:MM:SS                    |                    | Read ² / Write |
| HH:MM:SS.SSS                |                    | Read ² / Write |
| Timestamps since unix epoch | Read ³             |                |
| now                         |                    |                |

¹ Missing components are assumed to be zero. Dates are stored and read in the UTC time zone, unless the format is followed by a timezone indicator ⁽²⁾.

² This format may be optionally followed by a timezone indicator of the form [+-]HH:MM or just Z.

³ GRDB 2+ interprets numerical values as timestamps that fuel Date(timeIntervalSince1970:). Previous GRDB versions used to interpret numbers as julian days. Julian days are still supported, with the Date(julianDay:) initializer.

:warning: Warning: the range of valid years in the SQLite date formats is 0000-9999. You will need to pick another date format when your application needs to process years outside of this range. See the following chapters.

Date

Date can be stored and fetched from the database just like other values:

try db.execute(
    sql: "INSERT INTO player (creationDate, ...) VALUES (?, ...)",
    arguments: [Date(), ...])

let row = try Row.fetchOne(db, ...)!
let creationDate: Date = row["creationDate"]

Dates are stored using the format “YYYY-MM-DD HH:MM:SS.SSS” in the UTC time zone. It is precise to the millisecond.

:point_up: Note: this format was chosen because it is the only format that is:

  • Comparable (ORDER BY date works)
  • Comparable with the SQLite keyword CURRENT_TIMESTAMP (WHERE date > CURRENT_TIMESTAMP works)
  • Able to feed SQLite date & time functions
  • Precise enough

:warning: Warning: the range of valid years in the SQLite date format is 0000-9999. You will experience problems with years outside of this range, such as decoding errors, or invalid date computations with SQLite date & time functions.

Some applications may prefer another date format:

  • Some may prefer ISO-8601, with a T separator.
  • Some may prefer ISO-8601, with a time zone.
  • Some may need to store years beyond the 0000-9999 range.
  • Some may need sub-millisecond precision.
  • Some may need exact Date roundtrip.
  • Etc.

You should think twice before choosing a different date format:

  • ISO-8601 is about exchange and communication, when SQLite is about storage and data manipulation. Sharing the same representation in your database and in JSON files only provides a superficial convenience, and should be the least of your priorities. Don’t store dates as ISO-8601 without understanding what you lose. For example, ISO-8601 time zones forbid database-level date comparison.
  • Sub-millisecond precision and exact Date roundtrip are not as obvious needs as it seems at first sight. Dates generally don’t precisely roundtrip as soon as they leave your application anyway, because the other systems your app communicates with use their own date representation (the Android version of your app, the server your application is talking to, etc.) On top of that, Date comparison is at least as hard and nasty as floating point comparison.

The customization of date format is explicit. For example:

let date = Date()
let timeInterval = date.timeIntervalSinceReferenceDate
try db.execute(
    sql: "INSERT INTO player (creationDate, ...) VALUES (?, ...)",
    arguments: [timeInterval, ...])

if let row = try Row.fetchOne(db, ...) {
    let timeInterval: TimeInterval = row["creationDate"]
    let creationDate = Date(timeIntervalSinceReferenceDate: timeInterval)
}

See also Codable Records for more date customization options, and DatabaseValueConvertible if you want to define a Date-wrapping type with customized database representation.

DateComponents

DateComponents is indirectly supported, through the DatabaseDateComponents helper type.

DatabaseDateComponents reads date components from all date formats supported by SQLite, and stores them in the format of your choice, from HH:MM to YYYY-MM-DD HH:MM:SS.SSS.

:warning: Warning: the range of valid years is 0000-9999. You will experience problems with years outside of this range, such as decoding errors, or invalid date computations with SQLite date & time functions. See Date for more information.

DatabaseDateComponents can be stored and fetched from the database just like other values:

var components = DateComponents()
components.year = 1973
components.month = 9
components.day = 18

// Store "1973-09-18"
let dbComponents = DatabaseDateComponents(components, format: .YMD)
try db.execute(
    sql: "INSERT INTO player (birthDate, ...) VALUES (?, ...)",
    arguments: [dbComponents, ...])

// Read "1973-09-18"
let row = try Row.fetchOne(db, sql: "SELECT birthDate ...")!
let dbComponents: DatabaseDateComponents = row["birthDate"]
dbComponents.format         // .YMD (the actual format found in the database)
dbComponents.dateComponents // DateComponents

NSNumber, NSDecimalNumber, and Decimal

NSNumber, NSDecimalNumber, and Decimal can be stored and fetched from the database just like other values.

Here is how GRDB supports the various data types supported by SQLite:

|                 | Integer      | Double       | String       |
| :-------------- | :----------- | :----------- | :----------- |
| NSNumber        | Read / Write | Read / Write | Read         |
| NSDecimalNumber | Read / Write | Read / Write | Read         |
| Decimal         | Read         | Read         | Read / Write |

  • All three types can decode database integers and doubles:

    let number = try NSNumber.fetchOne(db, sql: "SELECT 10")            // NSNumber
    let number = try NSDecimalNumber.fetchOne(db, sql: "SELECT 1.23")   // NSDecimalNumber
    let number = try Decimal.fetchOne(db, sql: "SELECT -100")           // Decimal
    
  • All three types decode database strings as decimal numbers:

    let number = try NSNumber.fetchOne(db, sql: "SELECT '10'")          // NSDecimalNumber (sic)
    let number = try NSDecimalNumber.fetchOne(db, sql: "SELECT '1.23'") // NSDecimalNumber
    let number = try Decimal.fetchOne(db, sql: "SELECT '-100'")         // Decimal
    
  • NSNumber and NSDecimalNumber send 64-bit signed integers and doubles to the database:

    // INSERT INTO transfer VALUES (10)
    try db.execute(sql: "INSERT INTO transfer VALUES (?)", arguments: [NSNumber(value: 10)])
    
    // INSERT INTO transfer VALUES (10.0)
    try db.execute(sql: "INSERT INTO transfer VALUES (?)", arguments: [NSNumber(value: 10.0)])
    
    // INSERT INTO transfer VALUES (10)
    try db.execute(sql: "INSERT INTO transfer VALUES (?)", arguments: [NSDecimalNumber(string: "10.0")])
    
    // INSERT INTO transfer VALUES (10.5)
    try db.execute(sql: "INSERT INTO transfer VALUES (?)", arguments: [NSDecimalNumber(string: "10.5")])
    

    :warning: Warning: since SQLite does not support decimal numbers, sending a non-integer NSDecimalNumber can result in a loss of precision during the conversion to double.

    Instead of sending non-integer NSDecimalNumber to the database, you may prefer:

    • Send Decimal instead (those store decimal strings in the database).
    • Send integers instead (for example, store amounts of cents instead of amounts of Euros).
  • Decimal sends decimal strings to the database:

    // INSERT INTO transfer VALUES ('10')
    try db.execute(sql: "INSERT INTO transfer VALUES (?)", arguments: [Decimal(10)])
    
    // INSERT INTO transfer VALUES ('10.5')
    try db.execute(sql: "INSERT INTO transfer VALUES (?)", arguments: [Decimal(string: "10.5")!])
    

UUID

UUID can be stored and fetched from the database just like other values.

GRDB stores UUIDs as 16-byte data blobs, and decodes them from both 16-byte data blobs and strings such as “E621E1F8-C36C-495A-93FC-0C247A3E6E5F”.
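
For example (a sketch; the session table and its token column are hypothetical):

let id = UUID()
try db.execute(
    sql: "INSERT INTO session (token) VALUES (?)",
    arguments: [id])

let token = try UUID.fetchOne(db, sql: "SELECT token FROM session LIMIT 1") // UUID?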

Swift Enums

Swift enums and generally all types that adopt the RawRepresentable protocol can be stored and fetched from the database just like their raw values:

enum Color : Int {
    case red, white, rose
}

enum Grape : String {
    case chardonnay, merlot, riesling
}

// Declare empty DatabaseValueConvertible adoption
extension Color : DatabaseValueConvertible { }
extension Grape : DatabaseValueConvertible { }

// Store
try db.execute(
    sql: "INSERT INTO wine (grape, color) VALUES (?, ?)",
    arguments: [Grape.merlot, Color.red])

// Read
let rows = try Row.fetchCursor(db, sql: "SELECT * FROM wine")
while let row = try rows.next() {
    let grape: Grape = row["grape"]
    let color: Color = row["color"]
}

When a database value does not match any enum case, you get a fatal error. This fatal error can be avoided with the DatabaseValue type:

let row = try Row.fetchOne(db, sql: "SELECT 'syrah'")!

row[0] as String  // "syrah"
row[0] as Grape?  // fatal error: could not convert "syrah" to Grape.
row[0] as Grape   // fatal error: could not convert "syrah" to Grape.

let dbValue: DatabaseValue = row[0]
if dbValue.isNull {
    // Handle NULL
} else if let grape = Grape.fromDatabaseValue(dbValue) {
    // Handle valid grape
} else {
    // Handle unknown grape
}

Custom Value Types

Conversion to and from the database is based on the DatabaseValueConvertible protocol:

protocol DatabaseValueConvertible {
    /// Returns a value that can be stored in the database.
    var databaseValue: DatabaseValue { get }

    /// Returns a value initialized from dbValue, if possible.
    static func fromDatabaseValue(_ dbValue: DatabaseValue) -> Self?
}

All types that adopt this protocol can be used like all other values (Bool, Int, String, Date, Swift enums, etc.)

The databaseValue property returns DatabaseValue, a type that wraps the five values supported by SQLite: NULL, Int64, Double, String and Data. Since DatabaseValue has no public initializer, use DatabaseValue.null, or another type that already adopts the protocol: 1.databaseValue, "foo".databaseValue, etc. Conversion to DatabaseValue must not fail.

The fromDatabaseValue() factory method returns an instance of your custom type if the database value contains a suitable value. If the database value does not contain a suitable value, such as “foo” for Date, fromDatabaseValue must return nil (GRDB will interpret this nil result as a conversion error, and react accordingly).
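
For example, a custom wrapper type might adopt the protocol like this (a sketch; the AbsoluteURL type is hypothetical):

struct AbsoluteURL {
    var url: URL
}

extension AbsoluteURL: DatabaseValueConvertible {
    // Store the absolute string in the database
    var databaseValue: DatabaseValue {
        url.absoluteString.databaseValue
    }

    // Return nil when the database value is not a suitable string
    static func fromDatabaseValue(_ dbValue: DatabaseValue) -> AbsoluteURL? {
        guard let string = String.fromDatabaseValue(dbValue),
              let url = URL(string: string)
        else {
            return nil
        }
        return AbsoluteURL(url: url)
    }
}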

Value types that adopt both DatabaseValueConvertible and an archival protocol (Codable, Encodable or Decodable) are automatically encoded to and decoded from JSON arrays and objects:

// Encoded as a JSON object in the database:
struct Color: Codable, DatabaseValueConvertible {
    var r: Double
    var g: Double
    var b: Double
}

For such codable value types, GRDB uses the standard JSONDecoder and JSONEncoder from Foundation. By default, Data values are handled with the .base64 strategy, Date with the .millisecondsSince1970 strategy, and non-conforming floats with the .throw strategy.

In order to customize the JSON format, provide a custom implementation of the DatabaseValueConvertible requirements.

:point_up: Note: standard sequences such as Array, Set, or Dictionary do not conform to DatabaseValueConvertible, even conditionally. You won’t be able to directly fetch or store arrays, sets, or dictionaries as JSON database values. You can get free JSON support from these standard types when they are embedded as properties of Codable Records, though.
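
For example, a codable record might get free JSON encoding for an array property (a sketch; the column layout is an assumption):

struct Player: Codable, FetchableRecord, PersistableRecord {
    var name: String
    var score: Int
    var achievements: [String] // stored as a JSON array in a single column
}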

Transactions and Savepoints

Transactions and Safety

A transaction is a fundamental tool of SQLite that guarantees data consistency as well as proper isolation between application threads and database connections.

GRDB generally opens transactions for you, as a way to enforce its concurrency guarantees, and provide maximal security for both your application data and application logic:

// BEGIN TRANSACTION
// INSERT INTO credit ...
// INSERT INTO debit ...
// COMMIT
try dbQueue.write { db in
    try Credit(destinationAccount, amount).insert(db)
    try Debit(sourceAccount, amount).insert(db)
}

// BEGIN TRANSACTION
// INSERT INTO credit ...
// INSERT INTO debit ...
// COMMIT
try dbPool.write { db in
    try Credit(destinationAccount, amount).insert(db)
    try Debit(sourceAccount, amount).insert(db)
}

Yet you may need to control exactly when transactions take place:

Explicit Transactions

DatabaseQueue.inDatabase() and DatabasePool.writeWithoutTransaction() execute your database statements outside of any transaction:

// INSERT INTO credit ...
// INSERT INTO debit ...
try dbQueue.inDatabase { db in
    try Credit(destinationAccount, amount).insert(db)
    try Debit(sourceAccount, amount).insert(db)
}

// INSERT INTO credit ...
// INSERT INTO debit ...
try dbPool.writeWithoutTransaction { db in
    try Credit(destinationAccount, amount).insert(db)
    try Debit(sourceAccount, amount).insert(db)
}

Writing outside of any transaction is dangerous, for two reasons:

  • In our credit/debit example, you may successfully insert a credit, but fail inserting the debit, and end up with unbalanced accounts (oops).

    // UNSAFE DATABASE INTEGRITY
    try dbQueue.inDatabase { db in // or dbPool.writeWithoutTransaction
        try Credit(destinationAccount, amount).insert(db) // may succeed
        try Debit(sourceAccount, amount).insert(db)      // may fail
    }
    

    Transactions avoid this kind of bug.

  • Database pool concurrent reads can see an inconsistent state of the database:

    // UNSAFE CONCURRENCY
    try dbPool.writeWithoutTransaction { db in
        try Credit(destinationAccount, amount).insert(db)
        // <- Concurrent dbPool.read sees a partial db update here
        try Debit(sourceAccount, amount).insert(db)
    }
    

    Transactions avoid this kind of bug, too.

To open explicit transactions, use one of the Database.inTransaction, DatabaseQueue.inTransaction, or DatabasePool.writeInTransaction methods:

// BEGIN TRANSACTION
// INSERT INTO credit ...
// INSERT INTO debit ...
// COMMIT
try dbQueue.inDatabase { db in  // or dbPool.writeWithoutTransaction
    try db.inTransaction {
        try Credit(destinationAccount, amount).insert(db)
        try Debit(sourceAccount, amount).insert(db)
        return .commit
    }
}

// BEGIN TRANSACTION
// INSERT INTO credit ...
// INSERT INTO debit ...
// COMMIT
try dbQueue.inTransaction { db in  // or dbPool.writeInTransaction
    try Credit(destinationAccount, amount).insert(db)
    try Debit(sourceAccount, amount).insert(db)
    return .commit
}

If an error is thrown from the transaction block, the transaction is rolled back and the error is rethrown by the inTransaction method. If you return .rollback instead of .commit, the transaction is also rolled back, but no error is thrown.

You can also perform manual transaction management:

try dbQueue.inDatabase { db in  // or dbPool.writeWithoutTransaction
    try db.beginTransaction()
    ...
    try db.commit()

    try db.execute(sql: "BEGIN TRANSACTION")
    ...
    try db.execute(sql: "ROLLBACK")
}

Transactions can’t be left open unless you set the allowsUnsafeTransactions configuration flag:

// fatal error: A transaction has been left opened at the end of a database access
try dbQueue.inDatabase { db in
    try db.execute(sql: "BEGIN TRANSACTION")
    // <- no commit or rollback
}
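
Opting in might look like this (a sketch):

var config = Configuration()
config.allowsUnsafeTransactions = true
let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite", configuration: config)

// No fatal error if a transaction is left open at the end of a database access
try dbQueue.inDatabase { db in
    try db.execute(sql: "BEGIN TRANSACTION")
}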

You can check whether a transaction is currently open:

func myCriticalMethod(_ db: Database) throws {
    precondition(db.isInsideTransaction, "This method requires a transaction")
    try ...
}

Yet, you have a better option than checking for transactions: critical database sections should use savepoints, described below:

func myCriticalMethod(_ db: Database) throws {
    try db.inSavepoint {
        // Here the database is guaranteed to be inside a transaction.
        try ...
    }
}

Savepoints

Statements grouped in a savepoint can be rolled back without invalidating a whole transaction:

try dbQueue.write { db in
    // Makes sure both inserts succeed, or none:
    try db.inSavepoint {
        try Credit(destinationAccount, amount).insert(db)
        try Debit(sourceAccount, amount).insert(db)
        return .commit
    }

    // Other savepoints, etc...
}

If an error is thrown from the savepoint block, the savepoint is rolled back and the error is rethrown by the inSavepoint method. If you return .rollback instead of .commit, the savepoint is also rolled back, but no error is thrown.

Unlike transactions, savepoints can be nested. They implicitly open a transaction if none was open when the savepoint begins. As such, they behave just like nested transactions. Yet the database changes are only written to disk when the outermost transaction is committed:

try dbQueue.inDatabase { db in
    try db.inSavepoint {
        ...
        try db.inSavepoint {
            ...
            return .commit
        }
        ...
        return .commit  // writes changes to disk
    }
}

SQLite savepoints are more than nested transactions, though. For advanced uses, see the SQLite savepoint documentation.

Transaction Kinds

SQLite supports three kinds of transactions: deferred (the default), immediate, and exclusive.

The transaction kind can be changed in the database configuration, or for each transaction:

// 1) Default configuration:
let dbQueue = try DatabaseQueue(path: "...")

// BEGIN DEFERRED TRANSACTION ...
try dbQueue.write { db in ... }

// BEGIN EXCLUSIVE TRANSACTION ...
try dbQueue.inTransaction(.exclusive) { db in ... }

// 2) Customized default transaction kind:
var config = Configuration()
config.defaultTransactionKind = .immediate
let dbQueue = try DatabaseQueue(path: "...", configuration: config)

// BEGIN IMMEDIATE TRANSACTION ...
try dbQueue.write { db in ... }

// BEGIN EXCLUSIVE TRANSACTION ...
try dbQueue.inTransaction(.exclusive) { db in ... }

Prepared Statements

Prepared Statements let you prepare an SQL query and execute it later, several times if you need, with different arguments.

try dbQueue.write { db in
    let insertSQL = "INSERT INTO player (name, score) VALUES (:name, :score)"
    let insertStatement = try db.makeStatement(sql: insertSQL)

    let selectSQL = "SELECT * FROM player WHERE name = ?"
    let selectStatement = try db.makeStatement(sql: selectSQL)
}

The ? and colon-prefixed keys like :name in the SQL query are the statement arguments. You set them with arrays or dictionaries (arguments are actually of type StatementArguments, which happens to adopt the ExpressibleByArrayLiteral and ExpressibleByDictionaryLiteral protocols).

insertStatement.arguments = ["name": "Arthur", "score": 1000]
selectStatement.arguments = ["Arthur"]

Alternatively, you can create a prepared statement with SQL Interpolation:

let insertStatement = try db.makeStatement(literal: "INSERT ...")
let selectStatement = try db.makeStatement(literal: "SELECT ...")
//                                         ~~~~~~~

Statements can be executed:

try insertStatement.execute()

Statements can be used wherever a raw SQL query string would fit (see fetch queries):

let rows = try Row.fetchCursor(selectStatement)    // A Cursor of Row
let players = try Player.fetchAll(selectStatement) // [Player]
let players = try Player.fetchSet(selectStatement) // Set<Player>
let player = try Player.fetchOne(selectStatement)  // Player?

You can set the arguments at the moment of the statement execution:

try insertStatement.execute(arguments: ["name": "Arthur", "score": 1000])
let player = try Player.fetchOne(selectStatement, arguments: ["Arthur"])

When you want to build multiple statements joined with a semicolon, use the allStatements method:

let statements = try db.allStatements(sql: """
    INSERT INTO player (name, score) VALUES (?, ?);
    INSERT INTO player (name, score) VALUES (?, ?);
    """, arguments: ["Arthur", 100, "O'Brien", 1000])
while let statement = try statements.next() {
    try statement.execute()
}

allStatements also supports SQL Interpolation:

let statements = try db.allStatements(literal: """
    INSERT INTO player (name, score) VALUES (\("Arthur"), \(100));
    INSERT INTO player (name, score) VALUES (\("O'Brien"), \(1000));
    """)
while let statement = try statements.next() {
    try statement.execute()
}

You can turn the cursor returned from allStatements into a regular Swift array, but in this case make sure all individual statements can compile even if the previous ones were not run:

// OK: Array of statements
let statements = try Array(db.allStatements(sql: """
    INSERT ...; 
    UPDATE ...; 
    SELECT ...;
    """))

// FAILURE: Can't build an array of statements since 
// the INSERT won't compile until CREATE TABLE is run.
let statements = try Array(db.allStatements(sql: """
    CREATE TABLE player ...; 
    INSERT INTO player ...;
    """))

See also Database.execute(sql:) in the Executing Updates chapter.

:point_up: Note: it is a programmer error to reuse a prepared statement that has failed: GRDB may crash if you do so.

For more information about prepared statements, see the Statement reference.

Prepared Statements Cache

When the same query will be used several times in the lifetime of your application, you may feel a natural desire to cache prepared statements.

Don’t cache statements yourself.

:point_up: Note: This is because you don’t have the necessary tools. Statements are tied to specific SQLite connections and dispatch queues which you don’t manage yourself, especially when you use database pools. A change in the database schema may, or may not, invalidate a statement.

Instead, use the cachedStatement method. GRDB does all the hard caching and memory management stuff for you:

let statement = try db.cachedStatement(sql: sql)

Cached statements also support SQL Interpolation:

let statement = try db.cachedStatement(literal: "INSERT ...")
//                                     ~~~~~~~

:warning: Warning: Should a cached prepared statement throw an error, don’t reuse it (it is a programmer error). Instead, reload one from the cache.
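
For example, here is a sketch that fetches through a cached statement, with arguments (the Player record type is assumed from other chapters):

try dbQueue.read { db in
    // Prepared once per connection, then reused:
    let statement = try db.cachedStatement(sql: "SELECT * FROM player WHERE name = ?")
    let arthur = try Player.fetchOne(statement, arguments: ["Arthur"])
}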

Custom SQL Functions and Aggregates

SQLite lets you define SQL functions and aggregates.

A custom SQL function or aggregate extends SQLite:

SELECT reverse(name) FROM player;   -- custom function
SELECT maxLength(name) FROM player; -- custom aggregate

Custom SQL Functions

The function’s implementation takes an array of DatabaseValue, and returns any valid value (Bool, Int, String, Date, Swift enums, etc.). The number of database values is guaranteed to be argumentCount.

SQLite has the opportunity to perform additional optimizations when functions are “pure”, which means that their result only depends on their arguments. So make sure to set the pure argument to true when possible.

let reverse = DatabaseFunction("reverse", argumentCount: 1, pure: true) { (values: [DatabaseValue]) in
    // Extract string value, if any...
    guard let string = String.fromDatabaseValue(values[0]) else {
        return nil
    }
    // ... and return reversed string:
    return String(string.reversed())
}

You make a function available to a database connection through its configuration:

var config = Configuration()
config.prepareDatabase { db in
    db.add(function: reverse)
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

try dbQueue.read { db in
    // "oof"
    try String.fetchOne(db, sql: "SELECT reverse('foo')")!
}

Functions can take a variable number of arguments:

When you don’t provide any explicit argumentCount, the function can take any number of arguments:

let averageOf = DatabaseFunction("averageOf", pure: true) { (values: [DatabaseValue]) in
    let doubles = values.compactMap { Double.fromDatabaseValue($0) }
    return doubles.reduce(0, +) / Double(doubles.count)
}
db.add(function: averageOf)

// 2.0
try Double.fetchOne(db, sql: "SELECT averageOf(1, 2, 3)")!

Functions can throw:

let sqrt = DatabaseFunction("sqrt", argumentCount: 1, pure: true) { (values: [DatabaseValue]) in
    guard let double = Double.fromDatabaseValue(values[0]) else {
        return nil
    }
    guard double >= 0 else {
        throw DatabaseError(message: "invalid negative number")
    }
    return sqrt(double)
}
db.add(function: sqrt)

// SQLite error 1 with statement `SELECT sqrt(-1)`: invalid negative number
try Double.fetchOne(db, sql: "SELECT sqrt(-1)")!

Use custom functions in the query interface:

// SELECT reverse("name") FROM player
Player.select(reverse(nameColumn))

GRDB ships with built-in SQL functions that perform unicode-aware string transformations. See Unicode.

Custom Aggregates

Before registering a custom aggregate, you need to define a type that adopts the DatabaseAggregate protocol:

protocol DatabaseAggregate {
    // Initializes an aggregate
    init()

    // Called at each step of the aggregation
    mutating func step(_ dbValues: [DatabaseValue]) throws

    // Returns the final result
    func finalize() throws -> DatabaseValueConvertible?
}

For example:

struct MaxLength : DatabaseAggregate {
    var maxLength: Int = 0

    mutating func step(_ dbValues: [DatabaseValue]) {
        // At each step, extract string value, if any...
        guard let string = String.fromDatabaseValue(dbValues[0]) else {
            return
        }
        // ... and update the result
        let length = string.count
        if length > maxLength {
            maxLength = length
        }
    }

    func finalize() -> DatabaseValueConvertible? {
        maxLength
    }
}

let maxLength = DatabaseFunction(
    "maxLength",
    argumentCount: 1,
    pure: true,
    aggregate: MaxLength.self)

Like custom SQL Functions, you make an aggregate function available to a database connection through its configuration:

var config = Configuration()
config.prepareDatabase { db in
    db.add(function: maxLength)
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

try dbQueue.read { db in
    // Some Int
    try Int.fetchOne(db, sql: "SELECT maxLength(name) FROM player")!
}

The step method of the aggregate takes an array of DatabaseValue. This array contains as many values as the argumentCount parameter (or any number of values, when argumentCount is omitted).

The finalize method of the aggregate returns the final aggregated value (Bool, Int, String, Date, Swift enums, etc.).

SQLite has the opportunity to perform additional optimizations when aggregates are “pure”, which means that their result only depends on their inputs. So make sure to set the pure argument to true when possible.

Use custom aggregates in the query interface:

// SELECT maxLength("name") FROM player
let request = Player.select(maxLength.apply(nameColumn))
try Int.fetchOne(db, request) // Int?

Database Schema Introspection

GRDB comes with a set of schema introspection methods:

try dbQueue.read { db in
    // Bool, true if the table exists
    try db.tableExists("player")

    // [ColumnInfo], the columns in the table
    try db.columns(in: "player")

    // PrimaryKeyInfo
    try db.primaryKey("player")

    // [ForeignKeyInfo], the foreign keys defined on the table
    try db.foreignKeys(on: "player")

    // [IndexInfo], the indexes defined on the table
    try db.indexes(on: "player")

    // Bool, true if column(s) is a unique key (primary key or unique index)
    try db.table("player", hasUniqueKey: ["email"])
}

// Bool, true if argument is the name of an internal SQLite table
Database.isSQLiteInternalTable(...)

// Bool, true if argument is the name of an internal GRDB table
Database.isGRDBInternalTable(...)
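
For example, here is a sketch that prints the layout of the player table. The property names used on the introspection results (name, isNotNull, columns) are assumptions to be checked against the reference documentation:

try dbQueue.read { db in
    // Columns
    for column in try db.columns(in: "player") {
        print(column.name, column.isNotNull ? "NOT NULL" : "NULLABLE")
    }

    // Primary key
    let primaryKey = try db.primaryKey("player")
    print("Primary key columns:", primaryKey.columns)
}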

Row Adapters

Row adapters let you present database rows in the way expected by the row consumers.

They basically help two incompatible row interfaces to work together. For example, a row consumer expects a column named “consumed”, but the produced row has a column named “produced”.

In this case, the ColumnMapping row adapter comes in handy:

// Turn the 'produced' column into 'consumed':
let adapter = ColumnMapping(["consumed": "produced"])
let row = try Row.fetchOne(db, sql: "SELECT 'Hello' AS produced", adapter: adapter)!

// [consumed:"Hello"]
print(row)

// "Hello"
print(row["consumed"])

// ▿ [consumed:"Hello"]
//   unadapted: [produced:"Hello"]
print(row.debugDescription)

// [produced:"Hello"]
print(row.unadapted)

Record types are typical row consumers that expect database rows to have a specific layout so that they can decode them:

struct MyRecord: Decodable, FetchableRecord {
    var consumed: String
}
let record = try MyRecord.fetchOne(db, sql: "SELECT 'Hello' AS produced", adapter: adapter)!
print(record.consumed) // "Hello"

There are several situations where row adapters are useful:

  • They help disambiguate columns with identical names, which may happen when you select columns from several tables. See Joined Queries Support for an example.

  • They help when SQLite outputs unexpected column names, which may happen with some subqueries. See RenameColumnAdapter for an example.

Available row adapters are described below.

ColumnMapping

ColumnMapping renames columns. Build one with a dictionary whose keys are the adapted column names, and whose values are the column names in the raw row:

// [newA:0, newB:1]
let adapter = ColumnMapping(["newA": "a", "newB": "b"])
let row = try Row.fetchOne(db, sql: "SELECT 0 AS a, 1 AS b, 2 AS c", adapter: adapter)!

Note that columns that are not present in the dictionary are not present in the resulting adapted row.

EmptyRowAdapter

EmptyRowAdapter hides all columns.

let adapter = EmptyRowAdapter()
let row = try Row.fetchOne(db, sql: "SELECT 0 AS a, 1 AS b, 2 AS c", adapter: adapter)!
row.isEmpty // true

This degenerate adapter may turn out useful in some narrow use cases. You’ll be happy to find it when you need it.

RangeRowAdapter

RangeRowAdapter only exposes a range of columns.

// [b:1]
let adapter = RangeRowAdapter(1..<2)
let row = try Row.fetchOne(db, sql: "SELECT 0 AS a, 1 AS b, 2 AS c", adapter: adapter)!

RenameColumnAdapter

RenameColumnAdapter lets you transform column names with a function:

// [arrr:0, brrr:1, crrr:2]
let adapter = RenameColumnAdapter { column in column + "rrr" }
let row = try Row.fetchOne(db, sql: "SELECT 0 AS a, 1 AS b, 2 AS c", adapter: adapter)!

This adapter may turn out useful, for example, when subqueries contain duplicated column names:

let sql = "SELECT * FROM (SELECT 1 AS id, 2 AS id)"

// Prints ["id", "id:1"]
// Note the "id:1" column, generated by SQLite.
let row = try Row.fetchOne(db, sql: sql)!
print(Array(row.columnNames))

// Dropping the `:...` suffix prints ["id", "id"]
let adapter = RenameColumnAdapter { String($0.prefix(while: { $0 != ":" })) }
let adaptedRow = try Row.fetchOne(db, sql: sql, adapter: adapter)!
print(Array(adaptedRow.columnNames))

ScopeAdapter

ScopeAdapter defines row scopes:

let adapter = ScopeAdapter([
    "left": RangeRowAdapter(0..<2),
    "right": RangeRowAdapter(2..<4)])
let row = try Row.fetchOne(db, sql: "SELECT 0 AS a, 1 AS b, 2 AS c, 3 AS d", adapter: adapter)!

ScopeAdapter does not change the columns and values of the fetched row. Instead, it defines scopes, which you access through the Row.scopes property:

row                   // [a:0 b:1 c:2 d:3]
row.scopes["left"]    // [a:0 b:1]
row.scopes["right"]   // [c:2 d:3]
row.scopes["missing"] // nil

Scopes can be nested:

let adapter = ScopeAdapter([
    "left": ScopeAdapter([
        "left": RangeRowAdapter(0..<1),
        "right": RangeRowAdapter(1..<2)]),
    "right": ScopeAdapter([
        "left": RangeRowAdapter(2..<3),
        "right": RangeRowAdapter(3..<4)])
    ])
let row = try Row.fetchOne(db, sql: "SELECT 0 AS a, 1 AS b, 2 AS c, 3 AS d", adapter: adapter)!

let leftRow = row.scopes["left"]!
leftRow.scopes["left"]  // [a:0]
leftRow.scopes["right"] // [b:1]

let rightRow = row.scopes["right"]!
rightRow.scopes["left"]  // [c:2]
rightRow.scopes["right"] // [d:3]

Any adapter can be extended with scopes:

let baseAdapter = RangeRowAdapter(0..<2)
let adapter = ScopeAdapter(base: baseAdapter, scopes: [
    "remainder": SuffixRowAdapter(fromIndex: 2)])
let row = try Row.fetchOne(db, sql: "SELECT 0 AS a, 1 AS b, 2 AS c, 3 AS d", adapter: adapter)!

row // [a:0 b:1]
row.scopes["remainder"] // [c:2 d:3]

To see how ScopeAdapter can be used, see Joined Queries Support.

SuffixRowAdapter

SuffixRowAdapter hides the first columns in a row:

// [b:1 c:2]
let adapter = SuffixRowAdapter(fromIndex: 1)
let row = try Row.fetchOne(db, sql: "SELECT 0 AS a, 1 AS b, 2 AS c", adapter: adapter)!

Raw SQLite Pointers

Not all SQLite APIs are exposed by GRDB. When you need one that is missing, you can use the SQLite C Interface and call SQLite C functions directly.

Those functions are embedded right into the GRDB module, regardless of the underlying SQLite implementation (system SQLite, SQLCipher, or custom SQLite build):

import GRDB

let sqliteVersion = String(cString: sqlite3_libversion())

Raw pointers to database connections and statements are available through the Database.sqliteConnection and Statement.sqliteStatement properties:

try dbQueue.read { db in
    // The raw pointer to a database connection:
    let sqliteConnection = db.sqliteConnection

    // The raw pointer to a statement:
    let statement = try db.makeStatement(sql: "SELECT ...")
    let sqliteStatement = statement.sqliteStatement
}
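
For example, here is a sketch that feeds a raw statement pointer to a plain SQLite C function:

try dbQueue.read { db in
    let statement = try db.makeStatement(sql: "SELECT id, name FROM player")
    // Call the SQLite C API on the underlying sqlite3_stmt pointer:
    let columnCount = sqlite3_column_count(statement.sqliteStatement)
    print(columnCount) // 2
}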

:point_up: Notes

  • Those pointers are owned by GRDB: don’t close connections or finalize statements created by GRDB.
  • GRDB opens SQLite connections in the “multi-thread mode”, which (oddly) means that they are not thread-safe. Make sure you touch raw databases and statements inside their dedicated dispatch queues.
  • Use the raw SQLite C Interface at your own risk. GRDB won’t prevent you from shooting yourself in the foot.

Records

On top of the SQLite API, GRDB provides protocols and a class that help you manipulate database rows as regular objects named “records”:

try dbQueue.write { db in
    if var place = try Place.fetchOne(db, id: 1) {
        place.isFavorite = true
        try place.update(db)
    }
}

Of course, you need to open a database connection, and create database tables first.

To define your custom records, you subclass the ready-made Record class, or you extend your structs and classes with protocols that come with focused sets of features: fetching methods, persistence methods, record comparison…

Extending structs with record protocols is more “swifty”. Subclassing the Record class is more “classic”. You can choose either way. See some examples of record definitions, and the list of record methods for an overview.

:point_up: Note: if you are familiar with Core Data’s NSManagedObject or Realm’s Object, you may experience a cultural shock: GRDB records are not uniqued, do not auto-update, and do not lazy-load. This is both a purpose, and a consequence of protocol-oriented programming. You should read How to build an iOS application with SQLite and GRDB.swift for a general introduction.

:bulb: Tip: after you have read this chapter, check the Good Practices for Designing Record Types Guide.

:bulb: Tip: see the Demo Applications for sample apps that use records.

Overview

Protocols and the Record Class

Records in a Glance

Inserting Records

To insert a record in the database, call the insert method:

let player = Player(name: "Arthur", email: "arthur@example.com")
try player.insert(db)

:point_right: insert is available for subclasses of the Record class, and types that adopt the PersistableRecord protocol.

Fetching Records

To fetch records from the database, call a fetching method:

let arthur = try Player.fetchOne(db,            // Player?
    sql: "SELECT * FROM players WHERE name = ?",
    arguments: ["Arthur"])

let bestPlayers = try Player                    // [Player]
    .order(Column("score").desc)
    .limit(10)
    .fetchAll(db)

let spain = try Country.fetchOne(db, id: "ES")  // Country?

:point_right: Fetching from raw SQL is available for subclasses of the Record class, and types that adopt the FetchableRecord protocol.

:point_right: Fetching without SQL, using the query interface, is available for subclasses of the Record class, and types that adopt both FetchableRecord and TableRecord protocol.

Updating Records

To update a record in the database, call the update method:

var player: Player = ...
player.score = 1000
try player.update(db)

It is possible to avoid useless updates:

// does not hit the database if score has not changed
try player.updateChanges(db) {
    $0.score = 1000
}

See the query interface for batch updates:

try Player
    .filter(Column("team") == "red")
    .updateAll(db, Column("score") += 1)

:point_right: update methods are available for subclasses of the Record class, and types that adopt the PersistableRecord protocol. Batch updates are available on the TableRecord protocol.

Deleting Records

To delete a record in the database, call the delete method:

let player: Player = ...
try player.delete(db)

You can also delete by primary key, unique key, or perform batch deletes (see Delete Requests):

try Player.deleteOne(db, id: 1)
try Player.deleteOne(db, key: ["email": "arthur@example.com"])
try Country.deleteAll(db, ids: ["FR", "US"])
try Player
    .filter(Column("email") == nil)
    .deleteAll(db)

:point_right: delete methods are available for subclasses of the Record class, and types that adopt the PersistableRecord protocol. Batch deletes are available on the TableRecord protocol.

Counting Records

To count records, call the fetchCount method:

let playerCount: Int = try Player.fetchCount(db)

let playerWithEmailCount: Int = try Player
    .filter(Column("email") != nil)
    .fetchCount(db)

:point_right: fetchCount is available for subclasses of the Record class, and types that adopt the TableRecord protocol.

Details follow:

Record Protocols Overview

GRDB ships with three record protocols. Your own types will adopt one or several of them, according to the abilities you want to extend your types with.

  • FetchableRecord is able to decode database rows.

    struct Place: FetchableRecord { ... }
    let places = try dbQueue.read { db in
        try Place.fetchAll(db, sql: "SELECT * FROM place")
    }
    

    :bulb: Tip: FetchableRecord can derive its implementation from the standard Decodable protocol. See Codable Records for more information.

    FetchableRecord can decode database rows, but it is not able to build SQL requests for you. For that, you also need TableRecord:

  • TableRecord is able to generate SQL queries:

    struct Place: TableRecord { ... }
    let placeCount = try dbQueue.read { db in
        // Generates and runs `SELECT COUNT(*) FROM place`
        try Place.fetchCount(db)
    }
    

    When a type adopts both TableRecord and FetchableRecord, it can load from those requests:

    struct Place: TableRecord, FetchableRecord { ... }
    try dbQueue.read { db in
        let places = try Place.order(Column("title")).fetchAll(db)
        let paris = try Place.fetchOne(db, id: 1)
    }
    
  • PersistableRecord is able to write: it can create, update, and delete rows in the database:

    struct Place : PersistableRecord { ... }
    try dbQueue.write { db in
        try Place.deleteOne(db, id: 1)
        try Place(...).insert(db)
    }
    

    A persistable record can also compare itself against other records, and avoid useless database updates.

    :bulb: Tip: PersistableRecord can derive its implementation from the standard Encodable protocol. See Codable Records for more information.

FetchableRecord Protocol

The FetchableRecord protocol grants fetching methods to any type that can be built from a database row:

protocol FetchableRecord {
    /// Row initializer
    init(row: Row)
}

To use FetchableRecord, subclass the Record class, or adopt it explicitly. For example:

struct Place {
    var id: Int64?
    var title: String
    var coordinate: CLLocationCoordinate2D
}

extension Place : FetchableRecord {
    init(row: Row) {
        id = row["id"]
        title = row["title"]
        coordinate = CLLocationCoordinate2D(
            latitude: row["latitude"],
            longitude: row["longitude"])
    }
}

Rows also accept column enums:

extension Place : FetchableRecord {
    enum Columns: String, ColumnExpression {
        case id, title, latitude, longitude
    }

    init(row: Row) {
        id = row[Columns.id]
        title = row[Columns.title]
        coordinate = CLLocationCoordinate2D(
            latitude: row[Columns.latitude],
            longitude: row[Columns.longitude])
    }
}

See column values for more information about the row[] subscript.

When your record type adopts the standard Decodable protocol, you don’t have to provide the implementation for init(row:). See Codable Records for more information:

// That's all
struct Player: Decodable, FetchableRecord {
    var id: Int64
    var name: String
    var score: Int
}

FetchableRecord allows adopting types to be fetched from SQL queries:

try Place.fetchCursor(db, sql: "SELECT ...", arguments:...) // A Cursor of Place
try Place.fetchAll(db, sql: "SELECT ...", arguments:...)    // [Place]
try Place.fetchSet(db, sql: "SELECT ...", arguments:...)    // Set<Place>
try Place.fetchOne(db, sql: "SELECT ...", arguments:...)    // Place?

See fetching methods for information about the fetchCursor, fetchAll, fetchSet and fetchOne methods. See StatementArguments for more information about the query arguments.

:point_up: Note: for performance reasons, the same row argument to init(row:) is reused during the iteration of a fetch query. If you want to keep the row for later use, make sure to store a copy: self.row = row.copy().
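
For example, a hypothetical record that keeps the raw row for later use would store a copy:

struct PlayerSnapshot: FetchableRecord {
    var row: Row

    init(row: Row) {
        // The fetched row is reused during iteration, so keep a private copy:
        self.row = row.copy()
    }
}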

:point_up: Note: The FetchableRecord.init(row:) initializer fits the needs of most applications. But some applications are more demanding than others. When FetchableRecord does not exactly provide the support you need, have a look at the Beyond FetchableRecord chapter.

TableRecord Protocol

The TableRecord protocol generates SQL for you. To use TableRecord, subclass the Record class, or adopt it explicitly:

protocol TableRecord {
    static var databaseTableName: String { get }
    static var databaseSelection: [SQLSelectable] { get }
}

The databaseSelection type property is optional, and documented in the Columns Selected by a Request chapter.

The databaseTableName type property is the name of a database table. By default, it is derived from the type name:

struct Place: TableRecord { }
print(Place.databaseTableName) // prints "place"

For example:

  • Place: place
  • Country: country
  • PostalAddress: postalAddress
  • HTTPRequest: httpRequest
  • TOEFL: toefl

You can still provide a custom table name:

struct Place: TableRecord {
    static let databaseTableName = "location"
}
print(Place.databaseTableName) // prints "location"

Subclasses of the Record class must always override their superclass’s databaseTableName property:

class Place: Record {
    override class var databaseTableName: String { "place" }
}
print(Place.databaseTableName) // prints "place"

When a type adopts both TableRecord and FetchableRecord, it can be fetched using the query interface:

// SELECT * FROM place WHERE name = 'Paris'
let paris = try Place.filter(nameColumn == "Paris").fetchOne(db)

TableRecord also deals with primary and unique keys: see Fetching by Key and Testing for Record Existence.
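
For example (the email-based unique key is an assumption borrowed from other chapters):

try dbQueue.read { db in
    // Fetch by primary key
    let player = try Player.fetchOne(db, key: 1)

    // Fetch by unique key
    let arthur = try Player.fetchOne(db, key: ["email": "arthur@example.com"])

    // Test for record existence
    let exists = try Player.exists(db, key: 1)
}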

PersistableRecord Protocol

GRDB record types can create, update, and delete rows in the database.

Those abilities are granted by three protocols:

// Defines how a record encodes itself into the database
protocol EncodableRecord {
    /// Defines the values persisted in the database
    func encode(to container: inout PersistenceContainer)
}

// Adds persistence methods
protocol MutablePersistableRecord: TableRecord, EncodableRecord {
    /// Optional method that lets your adopting type store its rowID upon
    /// successful insertion. Don't call it directly: it is called for you.
    mutating func didInsert(with rowID: Int64, for column: String?)
}

// Adds immutability
protocol PersistableRecord: MutablePersistableRecord {
    /// Non-mutating version of the optional didInsert(with:for:)
    func didInsert(with rowID: Int64, for column: String?)
}

Yes, three protocols instead of one. Here is how you pick one or the other:

  • If your type is a class, choose PersistableRecord. On top of that, implement didInsert(with:for:) if the database table has an auto-incremented primary key.

  • If your type is a struct, and the database table has an auto-incremented primary key, choose MutablePersistableRecord, and implement didInsert(with:for:).

  • Otherwise, choose PersistableRecord, and ignore didInsert(with:for:).

The encode(to:) method defines which values (Bool, Int, String, Date, Swift enums, etc.) are assigned to database columns.

The optional didInsert method lets the adopting type store its rowID after successful insertion, and is only useful for tables that have an auto-incremented primary key. It is called from a protected dispatch queue, and serialized with all database updates.

To use the persistable protocols, subclass the Record class, or adopt one of them explicitly. For example:

extension Place : MutablePersistableRecord {
    /// The values persisted in the database
    func encode(to container: inout PersistenceContainer) {
        container["id"] = id
        container["title"] = title
        container["latitude"] = coordinate.latitude
        container["longitude"] = coordinate.longitude
    }

    // Update auto-incremented id upon successful insertion
    mutating func didInsert(with rowID: Int64, for column: String?) {
        id = rowID
    }
}

var paris = Place(
    id: nil,
    title: "Paris",
    coordinate: CLLocationCoordinate2D(latitude: 48.8534100, longitude: 2.3488000))

try paris.insert(db)
paris.id   // some value

Persistence containers also accept column enums:

extension Place : MutablePersistableRecord {
    enum Columns: String, ColumnExpression {
        case id, title, latitude, longitude
    }

    func encode(to container: inout PersistenceContainer) {
        container[Columns.id] = id
        container[Columns.title] = title
        container[Columns.latitude] = coordinate.latitude
        container[Columns.longitude] = coordinate.longitude
    }
}

When your record type adopts the standard Encodable protocol, you don’t have to provide the implementation for encode(to:). See Codable Records for more information:

// That's all
struct Player: Encodable, MutablePersistableRecord {
    var id: Int64?
    var name: String
    var score: Int

    // Update auto-incremented id upon successful insertion
    mutating func didInsert(with rowID: Int64, for column: String?) {
        id = rowID
    }
}

Persistence Methods

Record subclasses and types that adopt PersistableRecord are given default implementations for methods that insert, update, and delete:

// INSERT
try place.insert(db)
let insertedPlace = try place.inserted(db) // non-mutating

// UPDATE
try place.update(db)
try place.update(db, columns: ["title"])

// Maybe UPDATE
try place.updateChanges(db, from: otherPlace)
try place.updateChanges(db) { $0.isFavorite = true }
try place.updateChanges(db) // Record class only

// INSERT or UPDATE
try place.save(db)
let savedPlace = try place.saved(db) // non-mutating

// DELETE
try place.delete(db)

// EXISTENCE CHECK
let exists = try place.exists(db)

The TableRecord protocol comes with batch operations:

// UPDATE
try Place.updateAll(db, ...)

// DELETE
try Place.deleteAll(db)
try Place.deleteAll(db, ids:...)
try Place.deleteAll(db, keys:...)
try Place.deleteOne(db, id:...)
try Place.deleteOne(db, key:...)

  • All those methods can throw a DatabaseError.

  • update and updateChanges throw PersistenceError if the database does not contain any row for the primary key of the record.

  • save makes sure your values are stored in the database. It performs an UPDATE if the record has a non-null primary key, and then, if no row was modified, an INSERT. It directly performs an INSERT if the record has no primary key, or a null primary key.

  • delete and deleteOne return whether a database row was deleted or not. deleteAll returns the number of deleted rows. updateAll returns the number of updated rows. updateChanges returns whether a database row was updated or not.

All primary keys are supported, including composite primary keys that span several columns, and the implicit rowid primary key.
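
For example, here is a sketch of those return values in action, assuming a Player struct with an auto-incremented id, as defined earlier in this chapter:

try dbQueue.write { db in
    var player = Player(id: nil, name: "Arthur", score: 100)
    try player.insert(db)                  // didInsert sets player.id

    player.score = 1000
    try player.save(db)                    // UPDATE, because the primary key is set

    let deleted = try player.delete(db)    // true: a row was deleted
    let exists = try player.exists(db)     // false
}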

Customizing the Persistence Methods

Your custom type may want to perform extra work when the persistence methods are invoked.

For example, it may want to have its UUID automatically set before inserting. Or it may want to validate its values before saving.

When you subclass Record, you simply have to override the customized method, and call super:

class Player : Record {
    var uuid: UUID?

    override func insert(_ db: Database) throws {
        if uuid == nil {
            uuid = UUID()
        }
        try super.insert(db)
    }
}

If you use the raw PersistableRecord protocol, use one of the special methods performInsert, performUpdate, performSave, performDelete, or performExists:

struct Link : PersistableRecord {
    var url: URL

    func insert(_ db: Database) throws {
        try validate()
        try performInsert(db)
    }

    func update(_ db: Database, columns: Set<String>) throws {
        try validate()
        try performUpdate(db, columns: columns)
    }

    func validate() throws {
        if url.host == nil {
            throw ValidationError("url must be absolute.")
        }
    }
}

:point_up: Note: the special methods performInsert, performUpdate, etc. are reserved for your custom implementations. Do not use them elsewhere. Do not provide another implementation for those methods.

:point_up: Note: it is recommended that you do not implement your own version of the save method. Its default implementation forwards the job to update or insert: these are the methods that may need customization, not save.

Identifiable Records

When a record type maps a table with a single-column primary key, it is recommended to have it adopt the standard Identifiable protocol.

struct Player: Identifiable, FetchableRecord, PersistableRecord {
    var id: Int64 // fulfills the Identifiable requirement
    var name: String
    var score: Int
}

When id has a database-compatible type (Int64, Int, String, UUID, …), the Identifiable conformance unlocks type-safe record and request methods:

let player = try Player.fetchOne(db, id: 1)
let players = try Player.fetchAll(db, ids: [1, 2, 3])
let players = try Player.fetchSet(db, ids: [1, 2, 3])

let request = Player.filter(id: 1)
let request = Player.filter(ids: [1, 2, 3])

let ids = try Player.selectID().fetchAll(db)
let ids = try Player.filter(...).selectID().fetchSet(db)

try Player.deleteOne(db, id: 1)
try Player.deleteAll(db, ids: [1, 2, 3])

:point_up: Note: Identifiable is not available on all application targets, and not all tables have a single-column primary key. GRDB provides other methods that deal with primary and unique keys, but they won’t check the type of their arguments:

// Those methods are not type-checked
try Player.fetchOne(db, key: 1)
try Player.fetchOne(db, key: ["email": "arthur@example.com"])
try Country.fetchAll(db, keys: ["FR", "US"])
try Citizenship.fetchOne(db, key: ["citizenId": 1, "countryCode": "FR"])

let request = Player.filter(key: 1)
let request = Player.filter(keys: [1, 2, 3])

try Player.deleteOne(db, key: 1)
try Player.deleteAll(db, keys: [1, 2, 3])

Some database tables have a single-column primary key which is not called “id”:

try db.create(table: "country") { t in
    t.column("isoCode", .text).notNull().primaryKey()
    t.column("name", .text).notNull()
    t.column("population", .integer).notNull()
}

In this case, Identifiable conformance can be achieved, for example, by returning the primary key column from the id property:

struct Country: Identifiable, FetchableRecord, PersistableRecord {
    var isoCode: String
    var name: String
    var population: Int

    // Fulfill the Identifiable requirement
    var id: String { isoCode }
}

let france = try dbQueue.read { db in
    try Country.fetchOne(db, id: "FR")
}

Codable Records

Record types that adopt an archival protocol (Codable, Encodable or Decodable) get free database support just by declaring conformance to the desired record protocols:

// Declare a record...
struct Player: Codable, FetchableRecord, PersistableRecord {
    var name: String
    var score: Int
}

// ...and there you go:
try dbQueue.write { db in
    try Player(name: "Arthur", score: 100).insert(db)
    let players = try Player.fetchAll(db)
}

Codable records encode and decode their properties according to their own implementation of the Encodable and Decodable protocols. Yet databases have specific requirements:

  • Properties are always coded according to their preferred database representation, when they have one (all values that adopt the DatabaseValueConvertible protocol).
  • You can customize the encoding and decoding of dates and uuids.
  • Complex properties (arrays, dictionaries, nested structs, etc.) are stored as JSON.

For more information about Codable records, see:

:bulb: Tip: see the Demo Applications for sample code that uses Codable records.

JSON Columns

When a Codable record contains a property that is not a simple value (Bool, Int, String, Date, Swift enums, etc.), that value is encoded and decoded as a JSON string. For example:

enum AchievementColor: String, Codable {
    case bronze, silver, gold
}

struct Achievement: Codable {
    var name: String
    var color: AchievementColor
}

struct Player: Codable, FetchableRecord, PersistableRecord {
    var name: String
    var score: Int
    var achievements: [Achievement] // stored in a JSON column
}

try! dbQueue.write { db in
    // INSERT INTO player (name, score, achievements)
    // VALUES (
    //   'Arthur',
    //   100,
    //   '[{"color":"gold","name":"Use Codable Records"}]')
    let achievement = Achievement(name: "Use Codable Records", color: .gold)
    let player = Player(name: "Arthur", score: 100, achievements: [achievement])
    try player.insert(db)
}

GRDB uses the standard JSONDecoder and JSONEncoder from Foundation. By default, Data values are handled with the .base64 strategy, Date with the .millisecondsSince1970 strategy, and non conforming floats with the .throw strategy.

You can customize the JSON format by implementing those methods:

protocol FetchableRecord {
    static func databaseJSONDecoder(for column: String) -> JSONDecoder
}

protocol EncodableRecord {
    static func databaseJSONEncoder(for column: String) -> JSONEncoder
}

:bulb: Tip: Make sure you set the JSONEncoder sortedKeys option, available from iOS 11.0+, macOS 10.13+, tvOS 11.0+ and watchOS 4.0+. This option makes sure that the JSON output is stable. This stability is required for Record Comparison to work as expected, and database observation tools such as ValueObservation to accurately recognize changed records.
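
For example, here is a minimal sketch that opts in to sorted keys for all JSON columns (the Player and Achievement types are borrowed from the example above):

struct Player: Codable, FetchableRecord, PersistableRecord {
    var name: String
    var score: Int
    var achievements: [Achievement] // stored in a JSON column

    static func databaseJSONEncoder(for column: String) -> JSONEncoder {
        let encoder = JSONEncoder()
        encoder.outputFormatting = .sortedKeys // stable JSON output
        return encoder
    }
}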

Column Names Coding Strategies

By default, Codable Records store their values into database columns that match their coding keys: the teamID property is stored into the teamID column.

This behavior can be overridden, so that you can, for example, store the teamID property into the team_id column:

protocol FetchableRecord {
    static var databaseColumnDecodingStrategy: DatabaseColumnDecodingStrategy { get }
}

protocol EncodableRecord {
    static var databaseColumnEncodingStrategy: DatabaseColumnEncodingStrategy { get }
}

See DatabaseColumnDecodingStrategy and DatabaseColumnEncodingStrategy to learn about all available strategies.
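
For example, here is a sketch that stores the teamID property in the team_id column, assuming the built-in snake-case strategies:

struct Player: Codable, FetchableRecord, PersistableRecord {
    static let databaseColumnDecodingStrategy = DatabaseColumnDecodingStrategy.convertFromSnakeCase
    static let databaseColumnEncodingStrategy = DatabaseColumnEncodingStrategy.convertToSnakeCase

    var teamID: Int64 // read from and written to the `team_id` column
    var name: String
}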

Date and UUID Coding Strategies

By default, Codable Records encode and decode their Date and UUID properties as described in the general Date and DateComponents and UUID chapters.

To sum up: dates encode themselves in the “YYYY-MM-DD HH:MM:SS.SSS” format, in the UTC time zone, and decode a variety of date formats and timestamps. UUIDs encode themselves as 16-bytes data blobs, and decode both 16-bytes data blobs and strings such as “E621E1F8-C36C-495A-93FC-0C247A3E6E5F”.

Those behaviors can be overridden:

protocol FetchableRecord {
    static var databaseDateDecodingStrategy: DatabaseDateDecodingStrategy { get }
}

protocol EncodableRecord {
    static var databaseDateEncodingStrategy: DatabaseDateEncodingStrategy { get }
    static var databaseUUIDEncodingStrategy: DatabaseUUIDEncodingStrategy { get }
}

See DatabaseDateDecodingStrategy, DatabaseDateEncodingStrategy, and DatabaseUUIDEncodingStrategy to learn about all available strategies.

There is no customization of uuid decoding, because UUID can already decode all its encoded variants (16-bytes blobs and uuid strings, both uppercase and lowercase).

Customized date and uuid handling apply:

  • When encoding and decoding database rows to and from records (fetching and persistence methods).
  • In requests by single-column primary key: fetchOne(_:id:), filter(id:), deleteAll(_:keys:), etc.

They do not apply in other requests based on date or uuid values.

So make sure that dates and uuids are properly encoded in your requests. For example:

struct Player: Codable, FetchableRecord, PersistableRecord, Identifiable {
    // UUIDs are stored as strings
    static let databaseUUIDEncodingStrategy = DatabaseUUIDEncodingStrategy.uppercaseString
    var id: UUID
    ...
}

try dbQueue.write { db in
    let uuid = UUID()
    let player = Player(id: uuid, ...)

    // OK: inserts a player in the database, with a string uuid
    try player.insert(db)

    // OK: performs a string-based query, finds the inserted player
    _ = try Player.filter(id: uuid).fetchOne(db)

    // NOT OK: performs a blob-based query, fails to find the inserted player
    _ = try Player.filter(Column("id") == uuid).fetchOne(db)

    // OK: performs a string-based query, finds the inserted player
    _ = try Player.filter(Column("id") == uuid.uuidString).fetchOne(db)
}

The userInfo Dictionary

Your Codable Records can be stored in the database, but they may also have other purposes. In this case, you may need to customize their implementations of Decodable.init(from:) and Encodable.encode(to:), depending on the context.

The standard way to provide such context is the userInfo dictionary. Implement those properties:

protocol FetchableRecord {
    static var databaseDecodingUserInfo: [CodingUserInfoKey: Any] { get }
}

protocol EncodableRecord {
    static var databaseEncodingUserInfo: [CodingUserInfoKey: Any] { get }
}

For example, here is a Player type that customizes its decoding:

// A key that holds a decoder's name
let decoderName = CodingUserInfoKey(rawValue: "decoderName")!

struct Player: FetchableRecord, Decodable {
    init(from decoder: Decoder) throws {
        // Print the decoder name
        let decoderName = decoder.userInfo[decoderName] as? String
        print("Decoded from \(decoderName ?? "unknown decoder")")
        ...
    }
}

You can have a specific decoding from JSON…

// prints "Decoded from JSON"
let decoder = JSONDecoder()
decoder.userInfo = [decoderName: "JSON"]
let player = try decoder.decode(Player.self, from: jsonData)

… and another one from database rows:

extension Player {
    static let databaseDecodingUserInfo: [CodingUserInfoKey: Any] = [decoderName: "database row"]
}

// prints "Decoded from database row"
let player = try Player.fetchOne(db, ...)

:point_up: Note: make sure the databaseDecodingUserInfo and databaseEncodingUserInfo properties are explicitly declared as [CodingUserInfoKey: Any]. If they are not, the Swift compiler may silently miss the protocol requirement, resulting in sticky empty userInfo.

Tip: Derive Columns from Coding Keys

Codable types get a CodingKeys enum for free. You can use it to safely define database columns:

struct Player: Codable {
    var id: Int64
    var name: String
    var score: Int
}

extension Player: FetchableRecord, PersistableRecord {
    enum Columns {
        static let id = Column(CodingKeys.id)
        static let name = Column(CodingKeys.name)
        static let score = Column(CodingKeys.score)
    }
}
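
These columns can then be used in query interface requests. For example:

// SELECT * FROM player ORDER BY score DESC LIMIT 10
let bestPlayers = try dbQueue.read { db in
    try Player
        .order(Player.Columns.score.desc)
        .limit(10)
        .fetchAll(db)
}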

See the query interface and Good Practices for Designing Record Types for further information.

Record Class

Record is a class that is designed to be subclassed. It inherits its features from the FetchableRecord, TableRecord, and PersistableRecord protocols. On top of that, Record instances can compare against previous versions of themselves in order to avoid useless updates.

Record subclasses define their custom database relationship by overriding database methods. For example:

class Place: Record {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    var coordinate: CLLocationCoordinate2D

    init(id: Int64?, title: String, isFavorite: Bool, coordinate: CLLocationCoordinate2D) {
        self.id = id
        self.title = title
        self.isFavorite = isFavorite
        self.coordinate = coordinate
        super.init()
    }

    /// The table name
    override class var databaseTableName: String { "place" }

    /// The table columns
    enum Columns: String, ColumnExpression {
        case id, title, favorite, latitude, longitude
    }

    /// Creates a record from a database row
    required init(row: Row) {
        id = row[Columns.id]
        title = row[Columns.title]
        isFavorite = row[Columns.favorite]
        coordinate = CLLocationCoordinate2D(
            latitude: row[Columns.latitude],
            longitude: row[Columns.longitude])
        super.init(row: row)
    }

    /// The values persisted in the database
    override func encode(to container: inout PersistenceContainer) {
        container[Columns.id] = id
        container[Columns.title] = title
        container[Columns.favorite] = isFavorite
        container[Columns.latitude] = coordinate.latitude
        container[Columns.longitude] = coordinate.longitude
    }

    /// Update record ID after a successful insertion
    override func didInsert(with rowID: Int64, for column: String?) {
        id = rowID
    }
}

Record Comparison

Records that adopt the EncodableRecord protocol can compare against other records, or against previous versions of themselves.

This helps you avoid costly UPDATE statements when a record has not been edited.

The updateChanges Methods

The updateChanges methods perform a database update of the changed columns only (and do nothing if the record has no changes).

  • updateChanges(_:from:)

    This method lets you compare two records:

    if let oldPlayer = try Player.fetchOne(db, id: 42) {
        var newPlayer = oldPlayer
        newPlayer.score = 100
        if try newPlayer.updateChanges(db, from: oldPlayer) {
            print("player was modified, and updated in the database")
        } else {
            print("player was not modified, and database was not hit")
        }
    }
    
  • updateChanges(_:with:)

    This method lets you update a record in place:

    if var player = try Player.fetchOne(db, id: 42) {
        let modified = try player.updateChanges(db) {
            $0.score = 100
        }
        if modified {
            print("player was modified, and updated in the database")
        } else {
            print("player was not modified, and database was not hit")
        }
    }
    
  • updateChanges(_:) (Record class only)

    Instances of the Record class are able to compare against themselves, and know if they have changes that have not been saved since the last fetch or save:

    // Record class only
    if let player = try Player.fetchOne(db, id: 42) {
        player.score = 100
        if try player.updateChanges(db) {
            print("player was modified, and updated in the database")
        } else {
            print("player was not modified, and database was not hit")
        }
    }
    

The databaseEquals Method

This method returns whether two records have the same database representation:

let oldPlayer: Player = ...
var newPlayer: Player = ...
if newPlayer.databaseEquals(oldPlayer) == false {
    try newPlayer.save(db)
}

:point_up: Note: The comparison is performed on the database representation of records. As long as your record type adopts the EncodableRecord protocol, you don’t need to care about Equatable.

The databaseChanges and hasDatabaseChanges Methods

databaseChanges(from:) returns a dictionary of differences between two records:

let oldPlayer = Player(id: 1, name: "Arthur", score: 100)
let newPlayer = Player(id: 1, name: "Arthur", score: 1000)
for (column, oldValue) in newPlayer.databaseChanges(from: oldPlayer) {
    print("\(column) was \(oldValue)")
}
// prints "score was 100"

The Record class is able to compare against itself:

// Record class only
let player = Player(id: 1, name: "Arthur", score: 100)
try player.insert(db)
player.score = 1000
for (column, oldValue) in player.databaseChanges {
    print("\(column) was \(oldValue)")
}
// prints "score was 100"

Record instances also have a hasDatabaseChanges property:

// Record class only
player.score = 1000
if player.hasDatabaseChanges {
    try player.save(db)
}

Record.hasDatabaseChanges is false after a Record instance has been fetched or saved into the database. Subsequent modifications may set it, or not: hasDatabaseChanges is based on value comparison. Setting a property to the same value does not set the changed flag:

let player = Player(name: "Barbara", score: 750)
player.hasDatabaseChanges // true

try player.insert(db)
player.hasDatabaseChanges // false

player.name = "Barbara"
player.hasDatabaseChanges // false

player.score = 1000
player.hasDatabaseChanges // true
player.databaseChanges    // ["score": 750]

For an efficient algorithm which synchronizes the content of a database table with a JSON payload, check groue/SortedDifference.

Record Customization Options

GRDB records come with many default behaviors that are designed to fit most situations. Many of those defaults can be customized for your specific needs:

Codable Records have a few extra options:

Conflict Resolution

Insertions and updates can create conflicts: for example, a query may attempt to insert a duplicate row that violates a unique index.

Those conflicts normally end with an error. Yet SQLite lets you alter the default behavior, and handle conflicts with specific policies. For example, the INSERT OR REPLACE statement handles conflicts with the “replace” policy which replaces the conflicting row instead of throwing an error.

The five different policies are: abort (the default), replace, rollback, fail, and ignore.

SQLite lets you specify conflict policies in two different places:

  • In the definition of the database table:

    // CREATE TABLE player (
    //     id INTEGER PRIMARY KEY AUTOINCREMENT,
    //     email TEXT UNIQUE ON CONFLICT REPLACE
    // )
    try db.create(table: "player") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("email", .text).unique(onConflict: .replace) // <--
    }
    
    // Despite the unique index on email, both inserts succeed.
    // The second insert replaces the first row:
    try db.execute(sql: "INSERT INTO player (email) VALUES (?)", arguments: ["arthur@example.com"])
    try db.execute(sql: "INSERT INTO player (email) VALUES (?)", arguments: ["arthur@example.com"])
    
  • In each modification query:

    // CREATE TABLE player (
    //     id INTEGER PRIMARY KEY AUTOINCREMENT,
    //     email TEXT UNIQUE
    // )
    try db.create(table: "player") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("email", .text).unique()
    }
    
    // Again, despite the unique index on email, both inserts succeed.
    try db.execute(sql: "INSERT OR REPLACE INTO player (email) VALUES (?)", arguments: ["arthur@example.com"])
    try db.execute(sql: "INSERT OR REPLACE INTO player (email) VALUES (?)", arguments: ["arthur@example.com"])
    

When you want to handle conflicts at the query level, specify a custom persistenceConflictPolicy in your type that adopts the PersistableRecord protocol. It will alter the INSERT and UPDATE queries run by the insert, update and save persistence methods:

protocol MutablePersistableRecord {
    /// The policy that handles SQLite conflicts when records are
    /// inserted or updated.
    ///
    /// This property is optional: its default value uses the ABORT
    /// policy for both insertions and updates, so that GRDB generates
    /// regular INSERT and UPDATE queries.
    static var persistenceConflictPolicy: PersistenceConflictPolicy { get }
}

struct Player : MutablePersistableRecord {
    static let persistenceConflictPolicy = PersistenceConflictPolicy(
        insert: .replace,
        update: .replace)
}

// INSERT OR REPLACE INTO player (...) VALUES (...)
try player.insert(db)

:point_up: Note: the ignore policy does not play well at all with the didInsert method which notifies the rowID of inserted records. Choose your poison:

  • if you specify the ignore policy in the database table definition, don’t implement the didInsert method: it will be called with some random id in case of failed insert.
  • if you specify the ignore policy at the query level, the didInsert method is never called.

:point_up: Note: The replace policy may have to delete rows so that inserts and updates can succeed. Those deletions are not reported to transaction observers (this might change in a future release of SQLite).

The Implicit RowID Primary Key

All SQLite tables have a primary key. Even when the primary key is not explicit:

// No explicit primary key
try db.create(table: "event") { t in
    t.column("message", .text)
    t.column("date", .datetime)
}

// No way to define an explicit primary key
try db.create(virtualTable: "book", using: FTS4()) { t in
    t.column("title")
    t.column("author")
    t.column("body")
}

The implicit primary key is stored in the hidden column rowid. Hidden means that SELECT * does not select it, and yet it can be selected and queried: SELECT *, rowid ... WHERE rowid = 1.

Some GRDB methods will automatically use this hidden column when a table has no explicit primary key:

// SELECT * FROM event WHERE rowid = 1
let event = try Event.fetchOne(db, id: 1)

// DELETE FROM book WHERE rowid = 1
try Book.deleteOne(db, id: 1)

Exposing the RowID Column

By default, a record type that wraps a table without any explicit primary key doesn’t know about the hidden rowid column.

Without a primary key, records don’t have any identity, and the persistence methods can behave in an undesired fashion: update() throws errors, save() always performs insertions and may break constraints, exists() is always false.

When SQLite won’t let you provide an explicit primary key (as in full-text tables, for example), you may want to make your record type fully aware of the hidden rowid column:

  1. Have the databaseSelection static property (from the TableRecord protocol) return the hidden rowid column:

    struct Event : TableRecord {
        static let databaseSelection: [SQLSelectable] = [AllColumns(), Column.rowID]
    }
    
    // When you subclass Record, you need an override:
    class Book : Record {
        override class var databaseSelection: [SQLSelectable] {
            [AllColumns(), Column.rowID]
        }
    }
    

    GRDB will then select the rowid column by default:

    // SELECT *, rowid FROM event
    let events = try Event.fetchAll(db)
    
  2. Have init(row:) from the FetchableRecord protocol consume the “rowid” column:

    struct Event : FetchableRecord {
        var id: Int64?
    
        init(row: Row) {
            id = row[Column.rowID] // or `row[.rowID]` with Swift 5.5+
        }
    }
    

    Your fetched records will then know their ids:

    let event = try Event.fetchOne(db)!
    event.id // some value
    
  3. Encode the rowid in encode(to:), and keep it in the didInsert(with:for:) method (both from the PersistableRecord and MutablePersistableRecord protocols):

    struct Event : MutablePersistableRecord {
        var id: Int64?
    
        func encode(to container: inout PersistenceContainer) {
            container[Column.rowID] = id // or `container[.rowID]` with Swift 5.5+
            container["message"] = message
            container["date"] = date
        }
    
        // Update auto-incremented id upon successful insertion
        mutating func didInsert(with rowID: Int64, for column: String?) {
            id = rowID
        }
    }
    

    You will then be able to track your record ids, update them, or check for their existence:

    let event = Event(message: "foo", date: Date())
    
    // Insertion sets the record id:
    try event.insert(db)
    event.id // some value
    
    // Record can be updated:
    event.message = "bar"
    try event.update(db)
    
    // Record knows if it exists:
    try event.exists(db) // true
    

Beyond FetchableRecord

Some GRDB users eventually discover that the FetchableRecord protocol does not fit all situations. Use cases that are not well handled by FetchableRecord include:

  • Your application needs polymorphic row decoding: it decodes some type or another, depending on the values contained in a database row.

  • Your application needs to decode rows with a context: each decoded value should be initialized with some extra value that does not come from the database.

  • Your application needs a record type that supports untrusted databases, and may fail at decoding database rows (throw an error when a row contains invalid values).

Since those use cases are not well handled by FetchableRecord, don’t try to implement them on top of this protocol: you’ll just fight the framework.

Instead, please have a look at the CustomizedDecodingOfDatabaseRows playground. You’ll run some sample code, and learn how to escape FetchableRecord when you need to. And remember that leaving FetchableRecord will not deprive you of query interface requests and generally all SQL generation features of the TableRecord and PersistableRecord protocols.

Examples of Record Definitions

We will show below how to declare a record type for the following database table:

try dbQueue.write { db in
    try db.create(table: "place") { t in
        t.autoIncrementedPrimaryKey("id")
        t.column("title", .text).notNull()
        t.column("isFavorite", .boolean).notNull().defaults(to: false)
        t.column("longitude", .double).notNull()
        t.column("latitude", .double).notNull()
    }
}

Each one of the four examples below is correct. You will pick one or the other depending on your personal preferences and the requirements of your application:

Define a Codable struct, and adopt the record protocols you need This is the shortest way to define a record type. See the [Record Protocols Overview](#record-protocols-overview), and [Codable Records] for more information. “`swift struct Place: Codable { var id: Int64? var title: String var isFavorite: Bool private var latitude: CLLocationDegrees private var longitude: CLLocationDegrees var coordinate: CLLocationCoordinate2D { get { CLLocationCoordinate2D( latitude: latitude, longitude: longitude) } set { latitude = newValue.latitude longitude = newValue.longitude } } } // SQL generation extension Place: TableRecord { /// The table columns enum Columns { static let id = Column(CodingKeys.id) static let title = Column(CodingKeys.title) static let isFavorite = Column(CodingKeys.isFavorite) static let latitude = Column(CodingKeys.latitude) static let longitude = Column(CodingKeys.longitude) } } // Fetching methods extension Place: FetchableRecord { } // Persistence methods extension Place: MutablePersistableRecord { // Update auto-incremented id upon successful insertion mutating func didInsert(with rowID: Int64, for column: String?) { id = rowID } } ”`
Define a plain struct, and adopt the record protocols you need See the [Record Protocols Overview](#record-protocols-overview) for more information. “`swift struct Place { var id: Int64? var title: String var isFavorite: Bool var coordinate: CLLocationCoordinate2D } // SQL generation extension Place: TableRecord { /// The table columns enum Columns: String, ColumnExpression { case id, title, isFavorite, latitude, longitude } } // Fetching methods extension Place: FetchableRecord { /// Creates a record from a database row init(row: Row) { id = row[Columns.id] title = row[Columns.title] isFavorite = row[Columns.isFavorite] coordinate = CLLocationCoordinate2D( latitude: row[Columns.latitude], longitude: row[Columns.longitude]) } } // Persistence methods extension Place: MutablePersistableRecord { /// The values persisted in the database func encode(to container: inout PersistenceContainer) { container[Columns.id] = id container[Columns.title] = title container[Columns.isFavorite] = isFavorite container[Columns.latitude] = coordinate.latitude container[Columns.longitude] = coordinate.longitude } // Update auto-incremented id upon successful insertion mutating func didInsert(with rowID: Int64, for column: String?) { id = rowID } } ”`
Define a plain struct optimized for fetching performance

This struct derives its persistence methods from the standard Encodable protocol (see [Codable Records]), but performs optimized row decoding by accessing database columns with numeric indexes. See the [Record Protocols Overview](#record-protocols-overview) for more information.

struct Place: Encodable {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    private var latitude: CLLocationDegrees
    private var longitude: CLLocationDegrees

    var coordinate: CLLocationCoordinate2D {
        get {
            CLLocationCoordinate2D(
                latitude: latitude,
                longitude: longitude)
        }
        set {
            latitude = newValue.latitude
            longitude = newValue.longitude
        }
    }
}

// SQL generation
extension Place: TableRecord {
    /// The table columns
    enum Columns {
        static let id = Column(CodingKeys.id)
        static let title = Column(CodingKeys.title)
        static let isFavorite = Column(CodingKeys.isFavorite)
        static let latitude = Column(CodingKeys.latitude)
        static let longitude = Column(CodingKeys.longitude)
    }

    /// Arrange the selected columns and lock their order
    static let databaseSelection: [SQLSelectable] = [
        Columns.id,
        Columns.title,
        Columns.isFavorite,
        Columns.latitude,
        Columns.longitude]
}

// Fetching methods
extension Place: FetchableRecord {
    /// Creates a record from a database row
    init(row: Row) {
        // For high performance, use numeric indexes that match the
        // order of Place.databaseSelection
        id = row[0]
        title = row[1]
        isFavorite = row[2]
        coordinate = CLLocationCoordinate2D(
            latitude: row[3],
            longitude: row[4])
    }
}

// Persistence methods
extension Place: MutablePersistableRecord {
    // Update auto-incremented id upon successful insertion
    mutating func didInsert(with rowID: Int64, for column: String?) {
        id = rowID
    }
}

Subclass the Record class

See the [Record class](#record-class) for more information.

class Place: Record {
    var id: Int64?
    var title: String
    var isFavorite: Bool
    var coordinate: CLLocationCoordinate2D

    init(id: Int64?, title: String, isFavorite: Bool, coordinate: CLLocationCoordinate2D) {
        self.id = id
        self.title = title
        self.isFavorite = isFavorite
        self.coordinate = coordinate
        super.init()
    }

    /// The table name
    override class var databaseTableName: String { "place" }

    /// The table columns
    enum Columns: String, ColumnExpression {
        case id, title, isFavorite, latitude, longitude
    }

    /// Creates a record from a database row
    required init(row: Row) {
        id = row[Columns.id]
        title = row[Columns.title]
        isFavorite = row[Columns.isFavorite]
        coordinate = CLLocationCoordinate2D(
            latitude: row[Columns.latitude],
            longitude: row[Columns.longitude])
        super.init(row: row)
    }

    /// The values persisted in the database
    override func encode(to container: inout PersistenceContainer) {
        container[Columns.id] = id
        container[Columns.title] = title
        container[Columns.isFavorite] = isFavorite
        container[Columns.latitude] = coordinate.latitude
        container[Columns.longitude] = coordinate.longitude
    }

    // Update auto-incremented id upon successful insertion
    override func didInsert(with rowID: Int64, for column: String?) {
        id = rowID
    }
}

List of Record Methods

This is the list of record methods, along with their required protocols. The Record class adopts all these protocols, and adds a few extra methods.

Method Protocols Notes
Core Methods
init(row:) FetchableRecord
Type.databaseTableName TableRecord
Type.databaseSelection TableRecord *
Type.persistenceConflictPolicy PersistableRecord *
record.encode(to:) EncodableRecord
record.didInsert(with:for:) PersistableRecord
Insert and Update Records
record.insert(db) PersistableRecord
record.inserted(db) PersistableRecord
record.save(db) PersistableRecord
record.saved(db) PersistableRecord
record.update(db) PersistableRecord
record.update(db, columns:...) PersistableRecord
record.updateChanges(db, from:...) PersistableRecord *
record.updateChanges(db) { ... } PersistableRecord *
record.updateChanges(db) Record *
Type.updateAll(db, ...) TableRecord
Type.filter(...).updateAll(db, ...) TableRecord ²
Delete Records
record.delete(db) PersistableRecord
Type.deleteOne(db, key:...) TableRecord ¹
Type.deleteOne(db, id:...) TableRecord & Identifiable ¹
Type.deleteAll(db) TableRecord
Type.deleteAll(db, keys:...) TableRecord ¹
Type.deleteAll(db, ids:...) TableRecord & Identifiable ¹
Type.filter(...).deleteAll(db) TableRecord ²
Check Record Existence
record.exists(db) PersistableRecord
Type.exists(db, key: ...) TableRecord ¹
Type.exists(db, id: ...) TableRecord & Identifiable ¹
Type.filter(...).isEmpty(db) TableRecord ²
Convert Record to Dictionary
record.databaseDictionary EncodableRecord
Count Records
Type.fetchCount(db) TableRecord
Type.filter(...).fetchCount(db) TableRecord ²
Fetch Record Cursors
Type.fetchCursor(db) FetchableRecord & TableRecord
Type.fetchCursor(db, keys:...) FetchableRecord & TableRecord ¹
Type.fetchCursor(db, ids:...) FetchableRecord & TableRecord & Identifiable ¹
Type.fetchCursor(db, sql: sql) FetchableRecord ³
Type.fetchCursor(statement) FetchableRecord
Type.filter(...).fetchCursor(db) FetchableRecord & TableRecord ²
Fetch Record Arrays
Type.fetchAll(db) FetchableRecord & TableRecord
Type.fetchAll(db, keys:...) FetchableRecord & TableRecord ¹
Type.fetchAll(db, ids:...) FetchableRecord & TableRecord & Identifiable ¹
Type.fetchAll(db, sql: sql) FetchableRecord ³
Type.fetchAll(statement) FetchableRecord
Type.filter(...).fetchAll(db) FetchableRecord & TableRecord ²
Fetch Record Sets
Type.fetchSet(db) FetchableRecord & TableRecord
Type.fetchSet(db, keys:...) FetchableRecord & TableRecord ¹
Type.fetchSet(db, ids:...) FetchableRecord & TableRecord & Identifiable ¹
Type.fetchSet(db, sql: sql) FetchableRecord ³
Type.fetchSet(statement) FetchableRecord
Type.filter(...).fetchSet(db) FetchableRecord & TableRecord ²
Fetch Individual Records
Type.fetchOne(db) FetchableRecord & TableRecord
Type.fetchOne(db, key:...) FetchableRecord & TableRecord ¹
Type.fetchOne(db, id:...) FetchableRecord & TableRecord & Identifiable ¹
Type.fetchOne(db, sql: sql) FetchableRecord ³
Type.fetchOne(statement) FetchableRecord
Type.filter(...).fetchOne(db) FetchableRecord & TableRecord ²
Codable Records
Type.databaseDecodingUserInfo FetchableRecord *
Type.databaseJSONDecoder(for:) FetchableRecord *
Type.databaseDateDecodingStrategy FetchableRecord *
Type.databaseEncodingUserInfo EncodableRecord *
Type.databaseJSONEncoder(for:) EncodableRecord *
Type.databaseDateEncodingStrategy EncodableRecord *
Type.databaseUUIDEncodingStrategy EncodableRecord *
Define Associations
Type.belongsTo(...) TableRecord *
Type.hasMany(...) TableRecord *
Type.hasOne(...) TableRecord *
Type.hasManyThrough(...) TableRecord *
Type.hasOneThrough(...) TableRecord *
Building Query Interface Requests
record.request(for:...) TableRecord & EncodableRecord *
Type.all() TableRecord ²
Type.none() TableRecord ²
Type.select(...) TableRecord ²
Type.select(..., as:...) TableRecord ²
Type.selectID() TableRecord & Identifiable *
Type.annotated(with:...) TableRecord ²
Type.filter(...) TableRecord ²
Type.filter(id:) TableRecord & Identifiable *
Type.filter(ids:) TableRecord & Identifiable *
Type.matching(...) TableRecord ²
Type.including(all:) TableRecord ²
Type.including(optional:) TableRecord ²
Type.including(required:) TableRecord ²
Type.joining(optional:) TableRecord ²
Type.joining(required:) TableRecord ²
Type.group(...) TableRecord ²
Type.groupByPrimaryKey() TableRecord ²
Type.having(...) TableRecord ²
Type.order(...) TableRecord ²
Type.orderByPrimaryKey() TableRecord ²
Type.limit(...) TableRecord ²
Type.with(...) TableRecord ²
Record Comparison
record.databaseEquals(...) EncodableRecord
record.databaseChanges(from:...) EncodableRecord
record.updateChanges(db, from:...) PersistableRecord
record.updateChanges(db) { ... } PersistableRecord
record.hasDatabaseChanges Record
record.databaseChanges Record
record.updateChanges(db) Record

¹ All unique keys are supported: primary keys (single-column, composite, implicit RowID) and unique indexes:

try Player.fetchOne(db, id: 1)                                // Player?
try Player.fetchOne(db, key: ["email": "arthur@example.com"]) // Player?
try Country.fetchAll(db, keys: ["FR", "US"])                  // [Country]

² See Fetch Requests:

let request = Player.filter(emailColumn != nil).order(nameColumn)
let players = try request.fetchAll(db)  // [Player]
let count = try request.fetchCount(db)  // Int

³ See SQL queries:

let player = try Player.fetchOne(db, sql: "SELECT * FROM player WHERE id = ?", arguments: [1]) // Player?

See Prepared Statements:

let statement = try db.makeStatement(sql: "SELECT * FROM player WHERE id = ?")
let player = try Player.fetchOne(statement, arguments: [1])  // Player?

The Query Interface

The query interface lets you write pure Swift instead of SQL:

try dbQueue.write { db in
    // Update database schema
    try db.create(table: "wine") { t in ... }

    // Fetch records
    let wines = try Wine
        .filter(originColumn == "Burgundy")
        .order(priceColumn)
        .fetchAll(db)

    // Count
    let count = try Wine
        .filter(colorColumn == Color.red)
        .fetchCount(db)

    // Update
    try Wine
        .filter(originColumn == "Burgundy")
        .updateAll(db, priceColumn *= 0.75)

    // Delete
    try Wine
        .filter(corkedColumn == true)
        .deleteAll(db)
}

You need to open a database connection before you can query the database.

Please bear in mind that the query interface can not generate all possible SQL queries. You may also prefer writing SQL, and that is perfectly fine. From little snippets to full queries, your SQL skills are welcome:

try dbQueue.write { db in
    // Update database schema (with SQL)
    try db.execute(sql: "CREATE TABLE wine (...)")

    // Fetch records (with SQL)
    let wines = try Wine.fetchAll(db,
        sql: "SELECT * FROM wine WHERE origin = ? ORDER BY price",
        arguments: ["Burgundy"])

    // Count (with an SQL snippet)
    let count = try Wine
        .filter(sql: "color = ?", arguments: [Color.red])
        .fetchCount(db)

    // Update (with SQL)
    try db.execute(sql: "UPDATE wine SET price = price * 0.75 WHERE origin = 'Burgundy'")

    // Delete (with SQL)
    try db.execute(sql: "DELETE FROM wine WHERE corked")
}

So don’t miss the SQL API.

:point_up: Note: the generated SQL may change between GRDB releases, without notice: don’t have your application rely on any specific SQL output.

Database Schema

Once granted with a database connection, you can set up your database schema without writing SQL:

Create Tables

// CREATE TABLE place (
//   id INTEGER PRIMARY KEY AUTOINCREMENT,
//   title TEXT,
//   favorite BOOLEAN NOT NULL DEFAULT 0,
//   latitude DOUBLE NOT NULL,
//   longitude DOUBLE NOT NULL
// )
try db.create(table: "place") { t in
    t.autoIncrementedPrimaryKey("id")
    t.column("title", .text)
    t.column("favorite", .boolean).notNull().defaults(to: false)
    t.column("longitude", .double).notNull()
    t.column("latitude", .double).notNull()
}

The create(table:) method covers nearly all SQLite table creation features. For virtual tables, see Full-Text Search, or use raw SQL.
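
For instance, full-text virtual tables have a dedicated creation method (a brief sketch; the document table and its content column are made up):

// CREATE VIRTUAL TABLE document USING fts4(content)
try db.create(virtualTable: "document", using: FTS4()) { t in
    t.column("content")
}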

SQLite itself has many reference documents about table creation: CREATE TABLE, Datatypes In SQLite Version 3, SQLite Foreign Key Support, ON CONFLICT, The WITHOUT ROWID Optimization.

Configure table creation:

// CREATE TABLE example ( ... )
try db.create(table: "example") { t in ... }

// CREATE TEMPORARY TABLE IF NOT EXISTS example ( ... )
try db.create(table: "example", options: [.temporary, .ifNotExists]) { t in ... }

:bulb: Tip: database table names should be singular, and camelCased. Make them look like Swift identifiers: place, country, postalAddress, httpRequest.

This will help you use Associations when you need them. Database table names that follow another naming convention are totally OK, but you will need to perform extra configuration.
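
For example, a record type can map to a table whose name does not follow the convention by declaring its table name explicitly (a minimal sketch; the legacy table name is made up):

// The table is named "PLACES_LEGACY" in an existing schema
struct Place: TableRecord {
    static let databaseTableName = "PLACES_LEGACY"
}

// SELECT * FROM PLACES_LEGACY
let request = Place.all()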

:point_up: Note: WITHOUT ROWID tables can not be tracked with Database Observation tools.

Add regular columns with their name and optional type (text, integer, double, real, numeric, boolean, blob, date, datetime and any) - see SQLite data types:

// CREATE TABLE example (
//   a,
//   name TEXT,
//   creationDate DATETIME,
try db.create(table: "example") { t in
    t.column("a")
    t.column("name", .text)
    t.column("creationDate", .datetime)

Define not null columns, and set default values:

    // email TEXT NOT NULL,
    t.column("email", .text).notNull()

    // name TEXT NOT NULL DEFAULT 'Anonymous',
    t.column("name", .text).notNull().defaults(to: "Anonymous")

Use an individual column as primary, unique, or foreign key. When defining a foreign key, the referenced column is the primary key of the referenced table (unless you specify otherwise):

    // id INTEGER PRIMARY KEY AUTOINCREMENT,
    t.autoIncrementedPrimaryKey("id")

    // uuid TEXT PRIMARY KEY,
    t.column("uuid", .text).primaryKey()

    // email TEXT UNIQUE,
    t.column("email", .text).unique()

    // countryCode TEXT REFERENCES country(code) ON DELETE CASCADE,
    t.column("countryCode", .text).references("country", onDelete: .cascade)

:bulb: Tip: when you need an integer primary key that automatically generates unique values, it is highly recommended that you use the autoIncrementedPrimaryKey method:

try db.create(table: "example") { t in
    t.autoIncrementedPrimaryKey("id")
    ...
}

The reason for this recommendation is that auto-incremented primary keys prevent the reuse of ids. This prevents your app or database observation tools from thinking that a row was updated, when it was actually deleted and then replaced. Depending on your application needs, this may be acceptable. But usually it is not.

Create an index on the column:

    t.column("score", .integer).indexed()

For extra index options, see Create Indexes below.

Perform integrity checks on individual columns, and SQLite will only let conforming rows in. In the example below, the $0 closure variable is a column which lets you build any SQL expression.

    // name TEXT CHECK (LENGTH(name) > 0)
    // score INTEGER CHECK (score > 0)
    t.column("name", .text).check { length($0) > 0 }
    t.column("score", .integer).check(sql: "score > 0")

Columns can also be defined with a raw SQL String, or an SQL literal in which you can safely embed raw values without any risk of syntax errors or SQL injection:

    t.column(sql: "name TEXT")

    let defaultName: String = ...
    t.column(literal: "name TEXT DEFAULT \(defaultName)")

Other table constraints can involve several columns:

    // PRIMARY KEY (a, b),
    t.primaryKey(["a", "b"])

    // UNIQUE (a, b) ON CONFLICT REPLACE,
    t.uniqueKey(["a", "b"], onConflict: .replace)

    // FOREIGN KEY (a, b) REFERENCES parents(c, d),
    t.foreignKey(["a", "b"], references: "parents")

    // CHECK (a + b < 10),
    t.check(Column("a") + Column("b") < 10)

    // CHECK (a + b < 10)
    t.check(sql: "a + b < 10")

    // Raw SQL constraints
    t.constraint(sql: "CHECK (a + b < 10)")
    t.constraint(literal: "CHECK (a + b < \(10))")

Generated columns are available with a custom SQLite build:

    t.column("totalScore", .integer).generatedAs(sql: "score + bonus")
    t.column("totalScore", .integer).generatedAs(Column("score") + Column("bonus"))
}

Modify Tables

SQLite lets you modify existing tables:

// ALTER TABLE referer RENAME TO referrer
try db.rename(table: "referer", to: "referrer")

// ALTER TABLE player ADD COLUMN hasBonus BOOLEAN
// ALTER TABLE player RENAME COLUMN url TO homeURL
// ALTER TABLE player DROP COLUMN score
try db.alter(table: "player") { t in
    t.add(column: "hasBonus", .boolean)
    t.rename(column: "url", to: "homeURL") // SQLite 3.25+
    t.drop(column: "score") // SQLite 3.35+
}

:point_up: Note: SQLite restricts the possible table alterations, and may require you to recreate dependent triggers or views. See the ALTER TABLE documentation for details. See Advanced Database Schema Changes for a way to lift these restrictions.

Drop Tables

Drop tables with the drop(table:) method:

try db.drop(table: "obsolete")

Create Indexes

Create indexes with the create(index:) method:

// CREATE INDEX byName ON users(lastName, firstName)
try db.create(index: "byName", on: "users", columns: ["lastName, "firstName"])

// CREATE UNIQUE INDEX IF NOT EXISTS byEmail ON users(email)
try db.create(index: "byEmail", on: "users", columns: ["email"], options: [.unique, .ifNotExists])

Relevant SQLite documentation:

Requests

The query interface requests let you fetch values from the database:

let request = Player.filter(emailColumn != nil).order(nameColumn)
let players = try request.fetchAll(db)  // [Player]
let count = try request.fetchCount(db)  // Int

Query interface requests usually start from a type that adopts the TableRecord protocol, such as a Record subclass (see Records):

class Player: Record { ... }

// The request for all players:
let request = Player.all()
let players = try request.fetchAll(db) // [Player]

When you can not use a record type, use Table:

// The request for all rows from the player table:
let table = Table("player")
let request = table.all()
let rows = try request.fetchAll(db)    // [Row]

// The request for all players from the player table:
let table = Table<Player>("player")
let request = table.all()
let players = try request.fetchAll(db) // [Player]

:point_up: Note: all examples in the documentation below use a record type, but you can always substitute a Table instead.

Next, declare the table columns that you want to use for filtering, or sorting:

let idColumn = Column("id")
let nameColumn = Column("name")

You can also declare column enums, if you prefer:

// Columns.id and Columns.name can be used just as
// idColumn and nameColumn declared above.
enum Columns: String, ColumnExpression {
    case id
    case name
}

You can now build requests with the following methods: all, none, select, distinct, filter, matching, group, having, order, reversed, limit, joining, including, with. All those methods return another request, which you can further refine by applying another method: Player.select(...).filter(...).order(...).

  • all(), none(): the requests for all rows, or no row.

    // SELECT * FROM player
    Player.all()
    

    By default, all columns are selected. See Columns Selected by a Request.

  • select(...) and select(..., as:) define the selected columns. See Columns Selected by a Request.

    // SELECT name FROM player
    Player.select(nameColumn, as: String.self)
    
  • selectID() is available on Identifiable Records. It supports all tables that have a single-column primary key:

    // SELECT id FROM player
    Player.selectID()
    
    // SELECT id FROM player WHERE name IS NOT NULL
    Player.filter(nameColumn != nil).selectID()
    
  • annotated(with: expression...) extends the selection.

    // SELECT *, (score + bonus) AS total FROM player
    Player.annotated(with: (scoreColumn + bonusColumn).forKey("total"))
    
  • annotated(with: aggregate) extends the selection with association aggregates.

    // SELECT team.*, COUNT(DISTINCT player.id) AS playerCount
    // FROM team
    // LEFT JOIN player ON player.teamId = team.id
    // GROUP BY team.id
    Team.annotated(with: Team.players.count)
    
  • annotated(withRequired: association) and annotated(withOptional: association) extend the selection with Associations.

    // SELECT player.*, team.color
    // FROM player
    // JOIN team ON team.id = player.teamId
    Player.annotated(withRequired: Player.team.select(colorColumn))
    
  • distinct() performs uniquing.

    // SELECT DISTINCT name FROM player
    Player.select(nameColumn, as: String.self).distinct()
    
  • filter(expression) applies conditions.

    // SELECT * FROM player WHERE id IN (1, 2, 3)
    Player.filter([1,2,3].contains(idColumn))
    
    // SELECT * FROM player WHERE (name IS NOT NULL) AND (height > 1.75)
    Player.filter(nameColumn != nil && heightColumn > 1.75)
    
  • filter(id:) and filter(ids:) are type-safe methods available on Identifiable Records:

    // SELECT * FROM player WHERE id = 1
    Player.filter(id: 1)
    
    // SELECT * FROM country WHERE isoCode IN ('FR', 'US')
    Country.filter(ids: ["FR", "US"])
    
  • filter(key:) and filter(keys:) apply conditions on primary and unique keys:

    // SELECT * FROM player WHERE id = 1
    Player.filter(key: 1)
    
    // SELECT * FROM country WHERE isoCode IN ('FR', 'US')
    Country.filter(keys: ["FR", "US"])
    
    // SELECT * FROM citizenship WHERE citizenId = 1 AND countryCode = 'FR'
    Citizenship.filter(key: ["citizenId": 1, "countryCode": "FR"])
    
    // SELECT * FROM player WHERE email = 'arthur@example.com'
    Player.filter(key: ["email": "arthur@example.com"])
    
  • matching(pattern) performs full-text search.

    // SELECT * FROM document WHERE document MATCH 'sqlite database'
    let pattern = FTS3Pattern(matchingAllTokensIn: "SQLite database")
    Document.matching(pattern)
    

    When the pattern is nil, no row will match.

  • group(expression, ...) groups rows.

    // SELECT name, MAX(score) FROM player GROUP BY name
    Player
        .select(nameColumn, max(scoreColumn))
        .group(nameColumn)
    
  • having(expression) applies conditions on grouped rows.

    // SELECT team, MAX(score) FROM player GROUP BY team HAVING MIN(score) >= 1000
    Player
        .select(teamColumn, max(scoreColumn))
        .group(teamColumn)
        .having(min(scoreColumn) >= 1000)
    
  • having(aggregate) applies conditions on grouped rows, according to an association aggregate.

    // SELECT team.*
    // FROM team
    // LEFT JOIN player ON player.teamId = team.id
    // GROUP BY team.id
    // HAVING COUNT(DISTINCT player.id) >= 5
    Team.having(Team.players.count >= 5)
    
  • order(ordering, ...) sorts.

    // SELECT * FROM player ORDER BY name
    Player.order(nameColumn)
    
    // SELECT * FROM player ORDER BY score DESC, name
    Player.order(scoreColumn.desc, nameColumn)
    

    SQLite considers NULL values to be smaller than any other values for sorting purposes. Hence, NULLs naturally appear at the beginning of an ascending ordering and at the end of a descending ordering. With a custom SQLite build, this can be changed using .ascNullsLast and .descNullsFirst:

    // SELECT * FROM player ORDER BY score ASC NULLS LAST
    Player.order(scoreColumn.ascNullsLast)
    

    Each order call clears any previous ordering:

    // SELECT * FROM player ORDER BY name
    Player.order(scoreColumn).order(nameColumn)
    
  • reversed() reverses any existing orderings.

    // SELECT * FROM player ORDER BY score ASC, name DESC
    Player.order(scoreColumn.desc, nameColumn).reversed()
    

    If no ordering was already specified, this method has no effect:

    // SELECT * FROM player
    Player.all().reversed()
    
  • limit(limit, offset: offset) limits and pages results.

    // SELECT * FROM player LIMIT 5
    Player.limit(5)
    
    // SELECT * FROM player LIMIT 5 OFFSET 10
    Player.limit(5, offset: 10)
    
  • joining(...) and including(...) fetch and join records through Associations.

    // SELECT player.*, team.*
    // FROM player
    // JOIN team ON team.id = player.teamId
    Player.including(required: Player.team)
    
  • with(cte) embeds a common table expression:

    // WITH ... SELECT * FROM player
    let cte = CommonTableExpression(...)
    Player.with(cte)
    
  • Other requests that involve the primary key:

    • orderByPrimaryKey() sorts by primary key.

      // SELECT * FROM player ORDER BY id
      Player.orderByPrimaryKey()
      
      // SELECT * FROM country ORDER BY code
      Country.orderByPrimaryKey()
      
      // SELECT * FROM citizenship ORDER BY citizenId, countryCode
      Citizenship.orderByPrimaryKey()
      
    • groupByPrimaryKey() groups rows by primary key.

You can refine requests by chaining those methods:

// SELECT * FROM player WHERE (email IS NOT NULL) ORDER BY name
Player.order(nameColumn).filter(emailColumn != nil)

The select, order, group, and limit methods ignore and replace previously applied selection, orderings, grouping, and limits. By contrast, the filter, matching, and having methods extend the query:

Player                          // SELECT * FROM player
    .filter(nameColumn != nil)  // WHERE (name IS NOT NULL)
    .filter(emailColumn != nil) //        AND (email IS NOT NULL)
    .order(nameColumn)          // - ignored -
    .reversed()                 // - ignored -
    .order(scoreColumn)         // ORDER BY score
    .limit(20, offset: 40)      // - ignored -
    .limit(10)                  // LIMIT 10

Raw SQL snippets are also accepted, with optional arguments:

// SELECT DATE(creationDate), COUNT(*) FROM player WHERE name = 'Arthur' GROUP BY date(creationDate)
Player
    .select(sql: "DATE(creationDate), COUNT(*)")
    .filter(sql: "name = ?", arguments: ["Arthur"])
    .group(sql: "DATE(creationDate)")

Columns Selected by a Request

By default, query interface requests select all columns:

// SELECT * FROM player
struct Player: TableRecord { ... }
let request = Player.all()

// SELECT * FROM player
let table = Table("player")
let request = table.all()

The selection can be changed for each individual request, or, in the case of record-based requests, for all requests built from this record type.

The select(...) and select(..., as:) methods change the selection of a single request (see Fetching from Requests for detailed information):

let request = Player.select(max(Column("score")))
let maxScore = try Int.fetchOne(db, request) // Int?

let request = Player.select(max(Column("score")), as: Int.self)
let maxScore = try request.fetchOne(db)      // Int?

The default selection for a record type is controlled by the databaseSelection property:

struct RestrictedPlayer : TableRecord {
    static let databaseTableName = "player"
    static let databaseSelection: [SQLSelectable] = [Column("id"), Column("name")]
}

struct ExtendedPlayer : TableRecord {
    static let databaseTableName = "player"
    static let databaseSelection: [SQLSelectable] = [AllColumns(), Column.rowID]
}

// SELECT id, name FROM player
let request = RestrictedPlayer.all()

// SELECT *, rowid FROM player
let request = ExtendedPlayer.all()

:point_up: Note: make sure the databaseSelection property is explicitly declared as [SQLSelectable]. If it is not, the Swift compiler may silently miss the protocol requirement, resulting in sticky SELECT * requests. To verify your setup, see the How do I print a request as SQL? FAQ.
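
One quick way to verify the SQL that a request generates during development is statement tracing (a brief sketch; it assumes the db.trace API available in recent GRDB versions):

try dbQueue.read { db in
    // Print every SQL statement executed on this connection
    db.trace { print($0) }

    // Should print a statement that selects id and name only
    _ = try Row.fetchAll(db, RestrictedPlayer.all())
}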

Expressions

Feed requests with SQL expressions built from your Swift code:

SQL Operators

GRDB comes with a Swift version of many SQLite built-in operators, listed below. But not all: see Embedding SQL in Query Interface Requests for a way to add support for missing SQL operators.

  • =, <>, <, <=, >, >=, IS, IS NOT

    Comparison operators are based on the Swift operators ==, !=, ===, !==, <, <=, >, >=:

    // SELECT * FROM player WHERE (name = 'Arthur')
    Player.filter(nameColumn == "Arthur")
    
    // SELECT * FROM player WHERE (name IS NULL)
    Player.filter(nameColumn == nil)
    
    // SELECT * FROM player WHERE (score IS 1000)
    Player.filter(scoreColumn === 1000)
    
    // SELECT * FROM rectangle WHERE width < height
    Rectangle.filter(widthColumn < heightColumn)
    

    Subqueries are supported:

    // SELECT * FROM player WHERE score = (SELECT max(score) FROM player)
    let maximumScore = Player.select(max(scoreColumn))
    Player.filter(scoreColumn == maximumScore)
    
    // SELECT * FROM player WHERE score = (SELECT max(score) FROM player)
    let maximumScore = SQLRequest("SELECT max(score) FROM player")
    Player.filter(scoreColumn == maximumScore)
    

    :point_up: Note: SQLite string comparison, by default, is case-sensitive and not Unicode-aware. See string comparison if you need more control.

  • *, /, +, -

    SQLite arithmetic operators are derived from their Swift equivalent:

    // SELECT ((temperature * 1.8) + 32) AS fahrenheit FROM planet
    Planet.select((temperatureColumn * 1.8 + 32).forKey("fahrenheit"))
    

    :point_up: Note: an expression like nameColumn + "rrr" will be interpreted by SQLite as a numerical addition (with funny results), not as a string concatenation. See the concat operator below.

    When you want to join a sequence of expressions with the + or * operator, use joined(operator:):

    // SELECT score + bonus + 1000 FROM player
    let values = [
        scoreColumn,
        bonusColumn,
        1000.databaseValue]
    Player.select(values.joined(operator: .add))
    

    Note in the example above how you concatenate raw values: 1000.databaseValue. A plain 1000 would not compile.

    When the sequence is empty, joined(operator: .add) returns 0, and joined(operator: .multiply) returns 1.

  • ||

    Concatenate several strings:

    // SELECT firstName || ' ' || lastName FROM player
    Player.select([firstNameColumn, " ".databaseValue, lastNameColumn].joined(operator: .concat))
    

    Note in the example above how you concatenate raw strings: " ".databaseValue. A plain " " would not compile.

    When the sequence is empty, joined(operator: .concat) returns the empty string.

  • AND, OR, NOT

    The SQL logical operators are derived from the Swift &&, || and !:

    // SELECT * FROM player WHERE ((NOT verified) OR (score < 1000))
    Player.filter(!verifiedColumn || scoreColumn < 1000)
    

    When you want to join a sequence of expressions with the AND or OR operator, use joined(operator:):

    // SELECT * FROM player WHERE (verified AND (score >= 1000) AND (name IS NOT NULL))
    let conditions = [
        verifiedColumn,
        scoreColumn >= 1000,
        nameColumn != nil]
    Player.filter(conditions.joined(operator: .and))
    

    When the sequence is empty, joined(operator: .and) returns true, and joined(operator: .or) returns false:

    // SELECT * FROM player WHERE 1
    Player.filter([].joined(operator: .and))
    
    // SELECT * FROM player WHERE 0
    Player.filter([].joined(operator: .or))
    
  • BETWEEN, IN, NOT IN

    To check inclusion in a Swift sequence (array, set, range…), call the contains method:

    // SELECT * FROM player WHERE id IN (1, 2, 3)
    Player.filter([1, 2, 3].contains(idColumn))
    
    // SELECT * FROM player WHERE id NOT IN (1, 2, 3)
    Player.filter(![1, 2, 3].contains(idColumn))
    
    // SELECT * FROM player WHERE score BETWEEN 0 AND 1000
    Player.filter((0...1000).contains(scoreColumn))
    
    // SELECT * FROM player WHERE (score >= 0) AND (score < 1000)
    Player.filter((0..<1000).contains(scoreColumn))
    
    // SELECT * FROM player WHERE initial BETWEEN 'A' AND 'N'
    Player.filter(("A"..."N").contains(initialColumn))
    
    // SELECT * FROM player WHERE (initial >= 'A') AND (initial < 'N')
    Player.filter(("A"..<"N").contains(initialColumn))
    

    To check inclusion inside a subquery, call the contains method as well:

    // SELECT * FROM player WHERE id IN (SELECT playerId FROM playerSelection)
    let selectedPlayerIds = PlayerSelection.select(playerIdColumn)
    Player.filter(selectedPlayerIds.contains(idColumn))
    
    // SELECT * FROM player WHERE id IN (SELECT playerId FROM playerSelection)
    let selectedPlayerIds = SQLRequest("SELECT playerId FROM playerSelection")
    Player.filter(selectedPlayerIds.contains(idColumn))
    

    To check inclusion inside a common table expression, call the contains method as well:

    // WITH selectedName AS (...)
    // SELECT * FROM player WHERE name IN selectedName
    let cte = CommonTableExpression(named: "selectedName", ...)
    Player
        .with(cte)
        .filter(cte.contains(nameColumn))
    

    :point_up: Note: SQLite string comparison, by default, is case-sensitive and not Unicode-aware. See string comparison if you need more control.

  • EXISTS, NOT EXISTS

    To check if a subquery would return rows, call the exists method:

    // Teams that have at least one other player
    //
    //  SELECT * FROM team
    //  WHERE EXISTS (SELECT * FROM player WHERE teamID = team.id)
    let teamAlias = TableAlias()
    let player = Player.filter(Column("teamID") == teamAlias[Column("id")])
    let teams = Team.aliased(teamAlias).filter(player.exists())
    
    // Teams that have no player
    //
    //  SELECT * FROM team
    //  WHERE NOT EXISTS (SELECT * FROM player WHERE teamID = team.id)
    let teams = Team.aliased(teamAlias).filter(!player.exists())
    

    In the above example, you use a TableAlias in order to let a subquery refer to a column from another table.

    In the next example, which involves the same table twice, the table alias requires an explicit disambiguation with TableAlias(name:):

    // Players who coach at least one other player
    //
    //  SELECT coach.* FROM player coach
    //  WHERE EXISTS (SELECT * FROM player WHERE coachId = coach.id)
    let coachAlias = TableAlias(name: "coach")
    let coachedPlayer = Player.filter(Column("coachId") == coachAlias[Column("id")])
    let coaches = Player.aliased(coachAlias).filter(coachedPlayer.exists())
    

    Finally, subqueries can also be expressed as SQL, with SQL Interpolation:

    // SELECT coach.* FROM player coach
    // WHERE EXISTS (SELECT * FROM player WHERE coachId = coach.id)
    let coachedPlayer = SQLRequest("SELECT * FROM player WHERE coachId = \(coachAlias[Column("id")])")
    let coaches = Player.aliased(coachAlias).filter(coachedPlayer.exists())
    
  • LIKE

    The SQLite LIKE operator is available as the like method:

    // SELECT * FROM player WHERE (email LIKE '%@example.com')
    Player.filter(emailColumn.like("%@example.com"))
    
    // SELECT * FROM book WHERE (title LIKE '%10\%%' ESCAPE '\')
    Player.filter(emailColumn.like("%10\\%%", escape: "\\"))
    

    :point_up: Note: the SQLite LIKE operator is case-insensitive but not Unicode-aware. For example, the expression 'a' LIKE 'A' is true but 'æ' LIKE 'Æ' is false.

  • MATCH

    The full-text MATCH operator is available through FTS3Pattern (for FTS3 and FTS4 tables) and FTS5Pattern (for FTS5):

    FTS3 and FTS4:

    let pattern = FTS3Pattern(matchingAllTokensIn: "SQLite database")
    
    // SELECT * FROM document WHERE document MATCH 'sqlite database'
    Document.matching(pattern)
    
    // SELECT * FROM document WHERE content MATCH 'sqlite database'
    Document.filter(contentColumn.match(pattern))
    

    FTS5:

    let pattern = FTS5Pattern(matchingAllTokensIn: "SQLite database")
    
    // SELECT * FROM document WHERE document MATCH 'sqlite database'
    Document.matching(pattern)
    
  • AS

    To give an alias to an expression, use the forKey method:

    // SELECT (score + bonus) AS total
    // FROM player
    Player.select((Column("score") + Column("bonus")).forKey("total"))
    

    If you need to refer to this aliased column in another place of the request, use a detached column:

    // SELECT (score + bonus) AS total
    // FROM player 
    // ORDER BY total
    Player
        .select((Column("score") + Column("bonus")).forKey("total"))
        .order(Column("total").detached)
    

    Unlike Column("total"), the detached column Column("total").detached is never associated to the "player” table, so it is always rendered as total in the generated SQL, even when the request involves other tables via an association or a common table expression.

SQL Functions

GRDB comes with a Swift version of many SQLite built-in functions, listed below. But not all: see Embedding SQL in Query Interface Requests for a way to add support for missing SQL functions.

  • ABS, AVG, COUNT, DATETIME, JULIANDAY, LENGTH, MAX, MIN, SUM, TOTAL:

    Those are based on the abs, average, count, dateTime, julianDay, length, max, min, sum and total Swift functions:

    // SELECT MIN(score), MAX(score) FROM player
    Player.select(min(scoreColumn), max(scoreColumn))
    
    // SELECT COUNT(name) FROM player
    Player.select(count(nameColumn))
    
    // SELECT COUNT(DISTINCT name) FROM player
    Player.select(count(distinct: nameColumn))
    
    // SELECT JULIANDAY(date, 'start of year') FROM game
    Game.select(julianDay(dateColumn, .startOfYear))
    

    For more information about the functions dateTime and julianDay, see Date And Time Functions.

  • IFNULL

    Use the Swift ?? operator:

    // SELECT IFNULL(name, 'Anonymous') FROM player
    Player.select(nameColumn ?? "Anonymous")
    
    // SELECT IFNULL(name, email) FROM player
    Player.select(nameColumn ?? emailColumn)
    
  • LOWER, UPPER

    The query interface does not give access to those SQLite functions. Nothing against them, but they are not Unicode-aware.

    Instead, GRDB extends SQLite with SQL functions that call the Swift built-in string functions capitalized, lowercased, uppercased, localizedCapitalized, localizedLowercased and localizedUppercased:

    Player.select(nameColumn.uppercased())
    

    :point_up: Note: When comparing strings, prefer using a collation:

    let name: String = ...
    
    // Not recommended
    nameColumn.uppercased() == name.uppercased()
    
    // Better
    nameColumn.collating(.caseInsensitiveCompare) == name
    
  • Custom SQL functions and aggregates

    You can apply your own custom SQL functions and aggregates:

    let f = DatabaseFunction("f", ...)
    
    // SELECT f(name) FROM player
    Player.select(f.apply(nameColumn))
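
    For example, here is a sketch of defining and using a custom scalar function (the reverseString function and its behavior are made up for illustration):

    // A pure function that reverses a string, or returns NULL for non-string input
    let reverseString = DatabaseFunction("reverseString", argumentCount: 1, pure: true) { dbValues in
        guard let string = String.fromDatabaseValue(dbValues[0]) else { return nil }
        return String(string.reversed())
    }

    try dbQueue.write { db in
        db.add(function: reverseString)

        // SELECT reverseString(name) FROM player
        let reversedNames = try String.fetchAll(db, Player.select(reverseString.apply(nameColumn)))
        print(reversedNames)
    }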
    

Embedding SQL in Query Interface Requests

You will sometimes want to extend your query interface requests with SQL snippets. This can happen because GRDB does not provide a Swift interface for some SQL function or operator, or because you want to use an SQLite construct that GRDB does not support.

Support for extensibility is broad, but not unlimited. All the SQL queries built by the query interface have the shape below. If you need something else, you’ll have to use raw SQL requests.

WITH ...     -- 1
SELECT ...   -- 2
FROM ...     -- 3
JOIN ...     -- 4
WHERE ...    -- 5
GROUP BY ... -- 6
HAVING ...   -- 7
ORDER BY ... -- 8
LIMIT ...    -- 9
  1. WITH ...: see Common Table Expressions.

  2. SELECT ...

    The selection can be provided as raw SQL:

    // SELECT IFNULL(name, 'O''Brien'), score FROM player
    let request = Player.select(sql: "IFNULL(name, 'O''Brien'), score")
    
    // SELECT IFNULL(name, 'O''Brien'), score FROM player
    let defaultName = "O'Brien"
    let request = Player.select(sql: "IFNULL(name, ?), score", arguments: [suffix])
    

    The selection can be provided with SQL Interpolation:

    // SELECT IFNULL(name, 'O''Brien'), score FROM player
    let defaultName = "O'Brien"
    let request = Player.select(literal: "IFNULL(name, \(defaultName)), score")
    

    The selection can be provided with a mix of Swift and SQL Interpolation:

    // SELECT IFNULL(name, 'O''Brien') AS displayName, score FROM player
    let defaultName = "O'Brien"
    let displayName: SQL = "IFNULL(\(Column("name")), \(defaultName)) AS displayName"
    let request = Player.select(displayName, Column("score"))
    

    When the custom SQL snippet should behave as a full-fledged expression, with support for the + Swift operator, the forKey aliasing method, and all other SQL Operators, build an expression literal with the SQL.sqlExpression method:

    // SELECT IFNULL(name, 'O''Brien') AS displayName, score FROM player
    let defaultName = "O'Brien"
    let displayName = SQL("IFNULL(\(Column("name")), \(defaultName))").sqlExpression
    let request = Player.select(displayName.forKey("displayName"), Column("score"))
    

    Such expression literals allow you to build a reusable support library of SQL functions or operators that are missing from the query interface. For example, you can define a Swift date function:

    func date(_ value: SQLSpecificExpressible) -> SQLExpression {
        SQL("DATE(\(value))").sqlExpression
    }
    
    // SELECT * FROM "player" WHERE DATE("createdAt") = '2020-01-23'
    let request = Player.filter(date(Column("createdAt")) == "2020-01-23")
    

    See the Query Interface Organization for more information about SQLSpecificExpressible and SQLExpression.

  3. FROM ...: only one table is supported here. You can not customize this SQL part.

  4. JOIN ...: joins are fully controlled by Associations. You can not customize this SQL part.

  5. WHERE ...

    The WHERE clause can be provided as raw SQL:

    // SELECT * FROM player WHERE score >= 1000
    let request = Player.filter(sql: "score >= 1000")
    
    // SELECT * FROM player WHERE score >= 1000
    let minScore = 1000
    let request = Player.filter(sql: "score >= ?", arguments: [minScore])
    

    The WHERE clause can be provided with SQL Interpolation:

    // SELECT * FROM player WHERE score >= 1000
    let minScore = 1000
    let request = Player.filter(literal: "score >= \(minScore)")
    

    The WHERE clause can be provided with a mix of Swift and SQL Interpolation:

    // SELECT * FROM player WHERE (score >= 1000) AND (team = 'red')
    let minScore = 1000
    let scoreCondition: SQL = "\(Column("score")) >= \(minScore)"
    let request = Player.filter(scoreCondition && Column("team") == "red")
    

    See SELECT ... above for more SQL Interpolation examples.

  6. GROUP BY ...

    The GROUP BY clause can be provided as raw SQL, SQL Interpolation, or a mix of Swift and SQL Interpolation, just as the selection and the WHERE clause (see above).

  7. HAVING ...

    The HAVING clause can be provided as raw SQL, SQL Interpolation, or a mix of Swift and SQL Interpolation, just as the selection and the WHERE clause (see above).

  8. ORDER BY ...

    The ORDER BY clause can be provided as raw SQL, SQL Interpolation, or a mix of Swift and SQL Interpolation, just as the selection and the WHERE clause (see above).

    In order to support the desc and asc query interface operators, and the reversed() query interface method, you must provide your orderings as expression literals with the SQL.sqlExpression method:

    // SELECT * FROM "player" 
    // ORDER BY (score + bonus) ASC, name DESC
    let total = SQL("(score + bonus)").sqlExpression
    let request = Player
        .order(total.desc, Column("name"))
        .reversed()
    
  9. LIMIT ...: use the limit(_:offset:) method. You can not customize this SQL part.

Fetching from Requests

Once you have a request, you can fetch the records at the origin of the request:

// Some request based on `Player`
let request = Player.filter(...)... // QueryInterfaceRequest<Player>

// Fetch players:
try request.fetchCursor(db) // A Cursor of Player
try request.fetchAll(db)    // [Player]
try request.fetchSet(db)    // Set<Player>
try request.fetchOne(db)    // Player?

For example:

let allPlayers = try Player.fetchAll(db)                            // [Player]
let arthur = try Player.filter(nameColumn == "Arthur").fetchOne(db) // Player?

See fetching methods for information about the fetchCursor, fetchAll, fetchSet and fetchOne methods.

You sometimes want to fetch other values.

The simplest way is to use the request as an argument to a fetching method of the desired type:

// Fetch an Int
let request = Player.select(max(scoreColumn))
let maxScore = try Int.fetchOne(db, request) // Int?

// Fetch a Row
let request = Player.select(min(scoreColumn), max(scoreColumn))
let row = try Row.fetchOne(db, request)!     // Row
let minScore = row[0] as Int?
let maxScore = row[1] as Int?

You can also change the request so that it knows the type it has to fetch:

  • With asRequest(of:), useful when you use Associations:

    struct BookInfo: FetchableRecord, Decodable {
        var book: Book
        var author: Author
    }
    
    // A request of BookInfo
    let request = Book
        .including(required: Book.author)
        .asRequest(of: BookInfo.self)
    
    let bookInfos = try dbQueue.read { db in
        try request.fetchAll(db) // [BookInfo]
    }
    
  • With select(..., as:), which is handy when you change the selection:

    // A request of Int
    let request = Player.select(max(scoreColumn), as: Int.self)
    
    let maxScore = try dbQueue.read { db in
        try request.fetchOne(db) // Int?
    }
    

Fetching by Key

Fetching records according to their primary key is a common task.

Identifiable Records can use the type-safe methods fetchOne(_:id:), fetchAll(_:ids:) and fetchSet(_:ids:):

try Player.fetchOne(db, id: 1)               // Player?
try Country.fetchAll(db, ids: ["FR", "US"])  // [Country]

All record types can use fetchOne(_:key:), fetchAll(_:keys:) and fetchSet(_:keys:) that apply conditions on primary and unique keys:

try Player.fetchOne(db, key: 1)              // Player?
try Country.fetchAll(db, keys: ["FR", "US"]) // [Country]
try Player.fetchOne(db, key: ["email": "arthur@example.com"])            // Player?
try Citizenship.fetchOne(db, key: ["citizenId": 1, "countryCode": "FR"]) // Citizenship?

When the table has no explicit primary key, GRDB uses the hidden “rowid” column:

// SELECT * FROM document WHERE rowid = 1
try Document.fetchOne(db, key: 1)            // Document?

When you want to build a request and plan to fetch from it later, use a filter method:

let request = Player.filter(id: 1)
let request = Country.filter(ids: ["FR", "US"])
let request = Player.filter(key: ["email": "arthur@example.com"])
let request = Citizenship.filter(key: ["citizenId": 1, "countryCode": "FR"])

Testing for Record Existence

You can check if a request has matching rows in the database.

// Some request based on `Player`
let request = Player.filter(...)...

// Check for player existence:
let noSuchPlayer = try request.isEmpty(db) // Bool

You should check for emptiness instead of counting:

// Correct
let noSuchPlayer = try request.fetchCount(db) == 0
// Even better
let noSuchPlayer = try request.isEmpty(db)

You can also check if a given primary or unique key exists in the database.

Identifiable Records can use the type-safe method exists(_:id:):

try Player.exists(db, id: 1)
try Country.exists(db, id: "FR")

All record types can use exists(_:key:) that can check primary and unique keys:

try Player.exists(db, key: 1)
try Country.exists(db, key: "FR")
try Player.exists(db, key: ["email": "arthur@example.com"])
try Citizenship.exists(db, key: ["citizenId": 1, "countryCode": "FR"])

You should check for key existence instead of fetching a record and checking for nil:

// Correct
let playerExists = try Player.fetchOne(db, id: 1) != nil
// Even better
let playerExists = try Player.exists(db, id: 1)

Fetching Aggregated Values

Requests can count. The fetchCount() method returns the number of rows that would be returned by a fetch request:

// SELECT COUNT(*) FROM player
let count = try Player.fetchCount(db) // Int

// SELECT COUNT(*) FROM player WHERE email IS NOT NULL
let count = try Player.filter(emailColumn != nil).fetchCount(db)

// SELECT COUNT(DISTINCT name) FROM player
let count = try Player.select(nameColumn).distinct().fetchCount(db)

// SELECT COUNT(*) FROM (SELECT DISTINCT name, score FROM player)
let count = try Player.select(nameColumn, scoreColumn).distinct().fetchCount(db)

Other aggregated values can also be selected and fetched (see SQL Functions):

let request = Player.select(max(scoreColumn))
let maxScore = try Int.fetchOne(db, request) // Int?

let request = Player.select(min(scoreColumn), max(scoreColumn))
let row = try Row.fetchOne(db, request)!     // Row
let minScore = row[0] as Int?
let maxScore = row[1] as Int?

Delete Requests

Requests can delete records, with the deleteAll() method:

// DELETE FROM player
try Player.deleteAll(db)

// DELETE FROM player WHERE team = 'red'
try Player
    .filter(teamColumn == "red")
    .deleteAll(db)

// DELETE FROM player ORDER BY score LIMIT 10
try Player
    .order(scoreColumn)
    .limit(10)
    .deleteAll(db)

:point_up: Note: Deletion methods are available on types that adopt the TableRecord protocol, and on Table:

struct Player: TableRecord { ... }
try Player.deleteAll(db)          // Fine
try Table("player").deleteAll(db) // Just as fine

Deleting records according to their primary key is a common task.

Identifiable Records can use the type-safe methods deleteOne(_:id:) and deleteAll(_:ids:):

try Player.deleteOne(db, id: 1)
try Country.deleteAll(db, ids: ["FR", "US"])

All record types can use deleteOne(_:key:) and deleteAll(_:keys:) that apply conditions on primary and unique keys:

try Player.deleteOne(db, key: 1)
try Country.deleteAll(db, keys: ["FR", "US"])
try Player.deleteOne(db, key: ["email": "arthur@example.com"])
try Citizenship.deleteOne(db, key: ["citizenId": 1, "countryCode": "FR"])

When the table has no explicit primary key, GRDB uses the hidden “rowid” column:

// DELETE FROM document WHERE rowid = 1
try Document.deleteOne(db, key: 1)

Update Requests

Requests can batch update records. The updateAll() method accepts column assignments defined with the set(to:) method:

// UPDATE player SET score = 0, isHealthy = 1, bonus = NULL
try Player.updateAll(db, 
    Column("score").set(to: 0), 
    Column("isHealthy").set(to: true), 
    Column("bonus").set(to: nil))

// UPDATE player SET score = 0 WHERE team = 'red'
try Player
    .filter(Column("team") == "red")
    .updateAll(db, Column("score").set(to: 0))

// UPDATE player SET top = 1 ORDER BY score DESC LIMIT 10
try Player
    .order(Column("score").desc)
    .limit(10)
    .updateAll(db, Column("top").set(to: true))

// UPDATE country SET population = 67848156 WHERE id = 'FR'
try Country
    .filter(id: "FR")
    .updateAll(db, Column("population").set(to: 67_848_156))

Column assignments accept any expression:

// UPDATE player SET score = score + (bonus * 2)
try Player.updateAll(db, Column("score").set(to: Column("score") + Column("bonus") * 2))

As a convenience, you can also use the +=, -=, *=, or /= operators:

// UPDATE player SET score = score + (bonus * 2)
try Player.updateAll(db, Column("score") += Column("bonus") * 2)

Default Conflict Resolution rules apply, and you may also provide a specific one:

// UPDATE OR IGNORE player SET ...
try Player.updateAll(db, onConflict: .ignore, /* assignments... */)

:point_up: Note: The updateAll method is available on types that adopt the TableRecord protocol, and on Table:

struct Player: TableRecord { ... }
try Player.updateAll(db, ...)          // Fine
try Table("player").updateAll(db, ...) // Just as fine

Custom Requests

Until now, we have seen requests created from any type that adopts the TableRecord protocol:

let request = Player.all()  // QueryInterfaceRequest<Player>

Those requests of type QueryInterfaceRequest can fetch and count:

try request.fetchCursor(db) // A Cursor of Player
try request.fetchAll(db)    // [Player]
try request.fetchSet(db)    // Set<Player>
try request.fetchOne(db)    // Player?
try request.fetchCount(db)  // Int

When the query interface can not generate the SQL you need, you can still fall back to raw SQL:

// Custom SQL is always welcome
try Player.fetchAll(db, sql: "SELECT ...")   // [Player]

But you may prefer to bring some elegance back in, and build custom requests:

// No custom SQL in sight
try Player.customRequest().fetchAll(db) // [Player]

To build custom requests, you can use one of the built-in requests or derive requests from other requests.

  • SQLRequest is a fetch request built from raw SQL. For example:

    extension Player {
        static func filter(color: Color) -> SQLRequest<Player> {
            SQLRequest<Player>(
                sql: "SELECT * FROM player WHERE color = ?"
                arguments: [color])
        }
    }
    
    // [Player]
    try Player.filter(color: .red).fetchAll(db)
    

    SQLRequest supports SQL Interpolation:

    extension Player {
        static func filter(color: Color) -> SQLRequest<Player> {
            "SELECT * FROM player WHERE color = \(color)"
        }
    }
    
  • The asRequest(of:) method changes the type fetched by the request. It is useful, for example, when you use Associations:

    struct BookInfo: FetchableRecord, Decodable {
        var book: Book
        var author: Author
    }
    
    let request = Book
        .including(required: Book.author)
        .asRequest(of: BookInfo.self)
    
    // [BookInfo]
    try request.fetchAll(db)
    
  • The adapted(_:) method eases the consumption of complex rows with row adapters. See Joined Queries Support for some sample code that uses this method.

  • AnyFetchRequest: a type-erased request.

Joined Queries Support

GRDB helps consuming joined queries with complex selection.

In this chapter, we will focus on the extraction of information from complex rows, such as the ones fetched by the query below:

-- How to consume the left, middle, and right parts of those rows?
SELECT player.*, team.*, MAX(round.score) AS maxScore
FROM player
LEFT JOIN team ON ...
LEFT JOIN round ON ...
GROUP BY ...

We will not talk about the generation of joined queries, which is covered in Associations.

So what are we talking about?

It is difficult to consume rows fetched from complex joined queries, because they often contain several columns with the same name: id from table player, id from table team, etc.

When such ambiguity happens, GRDB row accessors always favor the leftmost matching column. This means that row["id"] would give a player id, without any obvious way to access the team id.

A classical technique to avoid this ambiguity is to give each column a unique name. For example:

-- A classical technique
SELECT player.id AS player_id, player.name AS player_name, team.id AS team_id, team.name AS team_name, team.color AS team_color, MAX(round.score) AS maxScore
FROM player
LEFT JOIN team ON ...
LEFT JOIN round ON ...
GROUP BY ...

This technique works pretty well, but it has three drawbacks:

  1. The selection becomes hard to read and understand.
  2. Such queries are difficult to write by hand.
  3. The mangled names are a very bad fit for FetchableRecord types that expect specific column names. After all, if the Team record type can read SELECT * FROM team ..., it should be able to read SELECT ..., team.*, ... as well.

We thus need another technique. Below we’ll see how to split rows into slices, and preserve column names.

SELECT player.*, team.*, MAX(round.score) AS maxScore FROM ... will be split into three slices: one that contains player’s columns, one that contains team’s columns, and a remaining slice that contains remaining column(s). The Player record type will be able to read the first slice, which contains the columns expected by the Player.init(row:) initializer. In the same way, the Team record type could read the second slice.

Unlike the name-mangling technique, splitting rows keeps SQL legible, accepts your hand-crafted SQL queries, and plays as nicely as possible with your existing record types.

Splitting Rows, an Introduction

Let’s first write some introductory code, hoping that this chapter will help you understand how the pieces fall together. We’ll see later how records help us streamline the initial approach, how to track changes in joined requests, and how we can use the standard Decodable protocol.

To split rows, we will use row adapters. Row adapters adapt rows so that row consumers see exactly the columns they want. Among other things, row adapters can define several row scopes that give access to as many row slices. Sounds like a perfect match.

At the very beginning, there is an SQL query:

try dbQueue.read { db in
    let sql = """
        SELECT player.*, team.*, MAX(round.score) AS maxScore
        FROM player
        LEFT JOIN team ON ...
        LEFT JOIN round ON ...
        GROUP BY ...
        """

We need an adapter that extracts player columns, in a slice that has as many columns as there are columns in the player table. That’s RangeRowAdapter:

    // SELECT player.*, team.*, ...
    //        <------>
    let playerWidth = try db.columns(in: "player").count
    let playerAdapter = RangeRowAdapter(0 ..< playerWidth)

We also need an adapter that extracts team columns:

    // SELECT player.*, team.*, ...
    //                  <---->
    let teamWidth = try db.columns(in: "team").count
    let teamAdapter = RangeRowAdapter(playerWidth ..< (playerWidth + teamWidth))

We merge those two adapters in a single ScopeAdapter that will allow us to access both sliced rows:

    let playerScope = "player"
    let teamScope = "team"
    let adapter = ScopeAdapter([
        playerScope: playerAdapter,
        teamScope: teamAdapter])

And now we can fetch, and start consuming our rows. You already know row cursors:

    let rows = try Row.fetchCursor(db, sql: sql, adapter: adapter)
    while let row = try rows.next() {

From a fetched row, we can build a player:

        let player: Player = row[playerScope]

In the SQL query, the team is joined with the LEFT JOIN operator. This means that the team may be missing: its slice may contain team values, or it may only contain NULLs. When this happens, we don’t want to build a Team record, and we thus load an optional Team:

        let team: Team? = row[teamScope]

And finally, we can load the maximum score, assuming that the “maxScore” column is not ambiguous:

        let maxScore: Int = row["maxScore"]

        print("player: \(player)")
        print("team: \(team)")
        print("maxScore: \(maxScore)")
    }
}

:bulb: In this chapter, we have learned:

  • how to use RangeRowAdapter to extract a specific table’s columns into a row slice.
  • how to use ScopeAdapter to give access to several row slices through named scopes.
  • how to use Row subscripting to extract records from rows, or optional records in order to deal with left joins.

Splitting Rows, the Record Way

The introduction above covered some important techniques: row adapters split rows, and Row subscripting extracts records from row slices.

But we may want to make it more usable and robust:

  1. It’s generally easier to consume records than raw rows.
  2. Joined records do not always need all columns from a table (see TableRecord.databaseSelection in Columns Selected by a Request).
  3. Building row adapters by hand is verbose and error-prone.

To address the first bullet, let’s define a record that holds our player, optional team, and maximum score. Since it can decode database rows, it adopts the FetchableRecord protocol:

struct PlayerInfo {
    var player: Player
    var team: Team?
    var maxScore: Int
}

/// PlayerInfo can decode rows:
extension PlayerInfo: FetchableRecord {
    private enum Scopes {
        static let player = "player"
        static let team = "team"
    }

    init(row: Row) {
        player = row[Scopes.player]
        team = row[Scopes.team]
        maxScore = row["maxScore"]
    }
}

Now we write a method that returns a custom request, and then build the fetching method on top of that request:

extension PlayerInfo {
    /// The request for all player infos
    static func all() -> AdaptedFetchRequest<SQLRequest<PlayerInfo>> {

To acknowledge that both Player and Team records may customize their selection of the “player” and “team” columns, we’ll write our SQL in a slightly different way:

        // Let Player and Team customize their selection:
        let request: SQLRequest<PlayerInfo> = """
            SELECT
                \(columnsOf: Player.self), -- instead of player.*
                \(columnsOf: Team.self),   -- instead of team.*
                MAX(round.score) AS maxScore
            FROM player
            LEFT JOIN team ON ...
            LEFT JOIN round ON ...
            GROUP BY ...
            """

Our SQL is no longer a regular String, but an SQLRequest<PlayerInfo> which benefits from SQL Interpolation. Inside this request, \(columnsOf: Player.self) outputs player.*, unless Player defines a customized selection.
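For illustration, here is a sketch of such a customization (a hypothetical restricted selection; by default the selection remains player.*):

// With this hypothetical customization, \(columnsOf: Player.self)
// outputs the id and name columns instead of player.*
extension Player {
    static let databaseSelection: [SQLSelectable] = [Column("id"), Column("name")]
}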

Now we need to build adapters.

We use the splittingRowAdapters global function, whose job is precisely to build row adapters of desired widths.

And since counting table columns requires a database connection, we use the adapted(_:) request method. It allows requests to adapt themselves right before execution, when a database connection is available:

        return request.adapted { db in
            let adapters = try splittingRowAdapters(columnCounts: [
                Player.numberOfSelectedColumns(db),
                Team.numberOfSelectedColumns(db)])
            return ScopeAdapter([
                Scopes.player: adapters[0],
                Scopes.team: adapters[1]])
        }
    }

:point_up: Note: splittingRowAdapters returns as many adapters as necessary to fully split a row. In the example above, it returns three adapters: one for player, one for team, and one for the remaining columns.

And finally, we can define the fetching method:

    /// Fetches all player infos
    static func fetchAll(_ db: Database) throws -> [PlayerInfo] {
        try all().fetchAll(db)
    }
}

And when your app needs to fetch player infos, it now reads:

// Fetch player infos
let playerInfos = try dbQueue.read { db in
    try PlayerInfo.fetchAll(db)
}

:bulb: In this chapter, we have learned:

  • how to define a FetchableRecord record that consumes rows fetched from a joined query.
  • how to use SQL Interpolation and numberOfSelectedColumns in order to deal with nested record types that define custom selection.
  • how to use splittingRowAdapters in order to streamline the definition of row slices.
  • how to gather all relevant methods and constants in a record type, fully responsible for its relationship with the database.

Splitting Rows, the Codable Way

Codable Records build on top of the standard Decodable protocol in order to decode database rows.

You can consume complex joined queries with Codable records as well. As a demonstration, we’ll rewrite the above sample code:

struct Player: Decodable, FetchableRecord, TableRecord {
    var id: Int64
    var name: String
}
struct Team: Decodable, FetchableRecord, TableRecord {
    var id: Int64
    var name: String
    var color: Color
}
struct PlayerInfo: Decodable, FetchableRecord {
    var player: Player
    var team: Team?
    var maxScore: Int
}

extension PlayerInfo {
    /// The request for all player infos
    static func all() -> AdaptedFetchRequest<SQLRequest<PlayerInfo>> {
        let request: SQLRequest<PlayerInfo> = """
            SELECT
                \(columnsOf: Player.self),
                \(columnsOf: Team.self),
                MAX(round.score) AS maxScore
            FROM player
            LEFT JOIN team ON ...
            LEFT JOIN round ON ...
            GROUP BY ...
            """
        return request.adapted { db in
            let adapters = try splittingRowAdapters(columnCounts: [
                Player.numberOfSelectedColumns(db),
                Team.numberOfSelectedColumns(db)])
            return ScopeAdapter([
                CodingKeys.player.stringValue: adapters[0],
                CodingKeys.team.stringValue: adapters[1]])
        }
    }

    /// Fetches all player infos
    static func fetchAll(_ db: Database) throws -> [PlayerInfo] {
        try all().fetchAll(db)
    }
}

// Fetch player infos
let playerInfos = try dbQueue.read { db in
    try PlayerInfo.fetchAll(db)
}

:bulb: In this chapter, we have learned how to use the Decodable protocol and its associated CodingKeys enum in order to dry up our code.

Database Changes Observation

SQLite notifies its host application of changes performed to the database, as well of transaction commits and rollbacks.

GRDB puts this SQLite feature to some good use, and lets you observe the database in various ways:

  • After Commit Hook
  • ValueObservation
  • DatabaseRegionObservation
  • TransactionObserver Protocol

Database observation requires that a single database queue or pool is kept open for the whole duration of the database usage.

After Commit Hook

When your application needs to make sure a specific database transaction has been successfully committed before it executes some work, use the Database.afterNextTransactionCommit(_:) method.

Its closure argument is called right after database changes have been successfully written to disk:

try dbQueue.write { db in
    db.afterNextTransactionCommit { db in
        print("success")
    }
    ...
} // prints "success"

The closure runs in a protected dispatch queue, serialized with all database updates.

This “after commit hook” helps synchronize the database with other resources, such as files or system sensors.

In the example below, a location manager starts monitoring a CLRegion if and only if it has successfully been stored in the database:

/// Inserts a region in the database, and starts monitoring upon
/// successful insertion.
func startMonitoring(_ db: Database, region: CLRegion) throws {
    // Make sure database is inside a transaction
    try db.inSavepoint {

        // Save the region in the database
        try insert(...)

        // Start monitoring if and only if the insertion is
        // eventually committed
        db.afterNextTransactionCommit { _ in
            // locationManager prefers the main queue:
            DispatchQueue.main.async {
                locationManager.startMonitoring(for: region)
            }
        }

        return .commit
    }
}

The method above won’t trigger the location manager if the transaction is eventually rolled back (explicitly, or because of an error), as in the sample code below:

try dbQueue.write { db in
    // success
    try startMonitoring(db, region)

    // On error, the transaction is rolled back, the region is not inserted, and
    // the location manager is not invoked.
    try failableMethod(db)
}

ValueObservation

ValueObservation tracks changes in database values. It automatically notifies your application with fresh values whenever changes are committed in the database.

Tracked changes are insertions, updates, and deletions that impact the tracked value, performed with the query interface, or raw SQL. This includes indirect changes triggered by foreign keys actions or SQL triggers.

:point_up: Note: Some changes are not notified: changes to internal system tables (such as sqlite_master), and changes to WITHOUT ROWID tables.

ValueObservation is the preferred GRDB tool for keeping your user interface synchronized with the database. See the Demo Applications for sample code.

ValueObservation Usage

  1. Make sure that a single database connection is kept open during the whole duration of the observation.

    ValueObservation does not notify changes performed by external connections.

  2. Define a ValueObservation by providing a function that fetches the observed value.

    let observation = ValueObservation.tracking { db in
        /* fetch and return the observed value */
    }
    
    // For example, an observation of [Player], which tracks all players:
    let observation = ValueObservation.tracking { db in
        try Player.fetchAll(db)
    }
    
    // The same observation, using shorthand notation:
    let observation = ValueObservation.tracking(Player.fetchAll)
    

    The observation can perform multiple requests, from multiple database tables, and even use raw SQL.

    Example of a more complex ValueObservation:

    struct HallOfFame {
        var totalPlayerCount: Int
        var bestPlayers: [Player]
    }

    // An observation of HallOfFame
    let observation = ValueObservation.tracking { db -> HallOfFame in
        let totalPlayerCount = try Player.fetchCount(db)
        let bestPlayers = try Player
            .order(Column("score").desc)
            .limit(10)
            .fetchAll(db)
        return HallOfFame(
            totalPlayerCount: totalPlayerCount,
            bestPlayers: bestPlayers)
    }

    Example of a SQL ValueObservation:

    // An observation of the maximum score
    let observation = ValueObservation.tracking { db in
        try Int.fetchOne(db, sql: "SELECT MAX(score) FROM player")
    }
  3. Start the observation in order to be notified of changes:

    // Start observing the database
    let cancellable: DatabaseCancellable = observation.start(
        in: dbQueue, // or dbPool
        onError: { error in print("players could not be fetched") },
        onChange: { (players: [Player]) in print("fresh players", players) })
    
  4. Stop the observation by calling the cancel() method on the object returned by the start method. Cancellation is automatic when the cancellable is deinitialized:

    cancellable.cancel()
    

As a convenience, ValueObservation can be turned into an async sequence, a Combine publisher, or an RxSwift observable:

Async sequence example (:fire: EXPERIMENTAL):

let observation = ValueObservation.tracking(Player.fetchAll)
for try await players in observation.values(in: dbQueue) {
    print("fresh players", players)
}

Combine example:

import Combine
import GRDB

let observation = ValueObservation.tracking(Player.fetchAll)
let cancellable = observation.publisher(in: dbQueue).sink(
    receiveCompletion: { completion in ... },
    receiveValue: { (players: [Player]) in
        print("fresh players", players)
    })

See Combine Support for more information.

RxSwift example:

import GRDB
import RxGRDB
import RxSwift

let observation = ValueObservation.tracking(Player.fetchAll)
let disposable = observation.rx.observe(in: dbQueue).subscribe(
    onNext: { (players: [Player]) in
        print("fresh players", players)
    },
    onError: { error in ... })

See the companion library RxGRDB for more information.

Generally speaking:

  • ValueObservation notifies an initial value before any subsequent changes.
  • ValueObservation only notifies changes committed to disk.
  • By default, ValueObservation notifies a fresh value whenever any of its components is modified (any fetched column, row, etc.). This can be configured.
  • By default, ValueObservation notifies the initial value, as well as subsequent changes and errors, on the main thread, asynchronously. This can be configured.
  • ValueObservation may coalesce subsequent changes into a single notification.
  • ValueObservation may notify consecutive identical values. You can filter out the undesired duplicates with the removeDuplicates method.
  • Starting an observation retains the database connection, until it is stopped. As long as the observation is active, the database connection won’t be deallocated.
  • The database observation stops when any of those conditions is met:
    • The cancellable returned by the start method is cancelled or deinitialized.
    • An error occurs.

Be aware that some use cases are not a good fit for ValueObservation. For example, your application may need to process absolutely all changes, and avoid any coalescing. It may also need to process changes before any further modifications are performed in the database file. In those cases, you need to track individual transactions, not values. See DatabaseRegionObservation. If you need to process uncommitted changes, see TransactionObserver.

ValueObservation Scheduling

By default, ValueObservation notifies the initial value, as well as subsequent changes and errors, on the main thread, asynchronously:

// The default scheduling
let cancellable = observation.start(
    in: dbQueue,
    onError: { error in ... },                   // called asynchronously on the main thread
    onChange: { value in print("fresh value") }) // called asynchronously on the main thread

You can change this behavior by adding a scheduling argument to the start() method.

For example, scheduling: .immediate makes sure the initial value is notified immediately when the observation starts. It helps your application update the user interface without having to wait for any asynchronous notifications:

class PlayersViewController: UIViewController {
    private var cancellable: DatabaseCancellable?

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Start observing the database
        let observation = ValueObservation.tracking(Player.fetchAll)
        cancellable = observation.start(
            in: dbQueue,
            scheduling: .immediate, // <- immediate scheduler
            onError: { error in ... },
            onChange: { [weak self] (players: [Player]) in
                guard let self = self else { return }
                self.updateView(players)
            })
        // <- Here the view has already been updated.
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        // Stop observing the database
        cancellable?.cancel()
    }

    private func updateView(_ players: [Player]) { ... }
}

Note that the .immediate scheduling requires that the observation starts from the main thread. A fatal error is raised otherwise.

The other built-in scheduler .async(onQueue:) asynchronously schedules values and errors on the dispatch queue of your choice:

let queue: DispatchQueue = ...
let cancellable = observation.start(
    in: dbQueue,
    scheduling: .async(onQueue: queue),
    onError: { error in ... },                   // called asynchronously on queue
    onChange: { value in print("fresh value") }) // called asynchronously on queue

ValueObservation Operators

Operators are methods that transform and configure value observations so that they better fit the needs of your application.

Debugging Operators

ValueObservation.map

The map operator transforms the values notified by a ValueObservation.

For example:

// Turn an observation of Player? into an observation of UIImage?
let observation = ValueObservation
    .tracking { db in try Player.fetchOne(db, id: 42) }
    .map { player in player?.image }

The transformation function does not block any database access. This makes the map operator a tool that helps reduce database contention.

ValueObservation.removeDuplicates

The removeDuplicates() and removeDuplicates(by:) operators filter out consecutive equal values:

For example:

// An observation of distinct Player?
let observation = ValueObservation
    .tracking { db in try Player.fetchOne(db, id: 42) }
    .removeDuplicates()

:bulb: Tip: When the observed value does not adopt Equatable, and it is impractical to provide a custom comparison function, you can observe distinct raw database values such as Row or DatabaseValue, before converting them to the desired type. For example, the previous observation can be rewritten as below:

// An observation of distinct Player?
let request = Player.filter(id: 42)
let observation = ValueObservation
    .tracking { db in try Row.fetchOne(db, request) }
    .removeDuplicates() // Row adopts Equatable
    .map { row in row.map(Player.init(row:)) }

This technique is also available for requests that involve Associations:

struct TeamInfo: Decodable, FetchableRecord {
    var team: Team
    var players: [Player]
}

// An observation of distinct [TeamInfo]
let request = Team.including(all: Team.players)
let observation = ValueObservation
    .tracking { db in try Row.fetchAll(db, request) }
    .removeDuplicates() // Row adopts Equatable
    .map { rows in rows.map(TeamInfo.init(row:)) }

ValueObservation.requiresWriteAccess

The requiresWriteAccess property is false by default. When true, a ValueObservation has write access to the database, and its fetches are automatically wrapped in a savepoint:

var observation = ValueObservation.tracking { db in
    // write access allowed
    ...
}
observation.requiresWriteAccess = true

When you use a database pool, this flag has a performance hit.
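For example, here is a sketch of an observation that performs some maintenance write before fetching (the DELETE statement is purely illustrative):

var observation = ValueObservation.tracking { db -> Int in
    // Write access: purge negative scores before counting (illustrative)
    try db.execute(sql: "DELETE FROM player WHERE score < 0")
    return try Player.fetchCount(db)
}
observation.requiresWriteAccess = true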

ValueObservation.handleEvents

The handleEvents operator lets your application observe the lifetime of a ValueObservation:

let observation = ValueObservation
    .tracking { db in ... }
    .handleEvents(
        willStart: {
            // The observation starts.
        },
        willFetch: {
            // The observation will perform a database fetch.
        },
        willTrackRegion: { databaseRegion in
            // The observation starts tracking a database region.
        },
        databaseDidChange: {
            // The observation was impacted by a database change.
        },
        didReceiveValue: { value in
            // A fresh value was observed.
            // NOTE: This closure runs on an unspecified DispatchQueue.
        },
        didFail: { error in
            // The observation completes with an error.
        },
        didCancel: {
            // The observation was cancelled.
        })

See also ValueObservation.print.

ValueObservation.print

The print operator logs messages for all ValueObservation events.

let observation = ValueObservation
    .tracking { db in ... }
    .print()

See also ValueObservation.handleEvents.

ValueObservation Sharing

:fire: EXPERIMENTAL

Sharing a ValueObservation allows several components of your app to be notified of database changes, in an efficient way.

A shared observation spares database resources. For example, when a database change happens, a fresh value is fetched only once, and then notified to all subscriptions.

Usage:

// SharedValueObservation<[Player]>
let sharedObservation = ValueObservation
    .tracking { db in try Player.fetchAll(db) }
    .shared(in: dbQueue)
//  ~~~~~~~~~~~~~~~~~~~~

let cancellable = try sharedObservation.start(
    onError: { error in ... },
    onChange: { (players: [Player]) in
        print("fresh players: \(players)")
    })

The sharing only applies if you start observing the database from the same SharedValueObservation instance:

// NOT shared
let cancellable1 = ValueObservation.tracking { db in ... }.shared(in: dbQueue).start(...)
let cancellable2 = ValueObservation.tracking { db in ... }.shared(in: dbQueue).start(...)

// Shared
let sharedObservation = ValueObservation.tracking { db in ... }.shared(in: dbQueue)
let cancellable1 = sharedObservation.start(...)
let cancellable2 = sharedObservation.start(...)

By default, fresh values are dispatched asynchronously on the main queue. You can change this behavior (see ValueObservation Scheduling for more information):

let sharedObservation = ValueObservation
    .tracking { db in try Player.fetchAll(db) }
    .shared(in: dbQueue, scheduling: .immediate)
//                       ~~~~~~~~~~~~~~~~~~~~~~

A shared observation starts observing the database as soon as it is subscribed. With the extent parameter, you can choose whether database observation should stop when the number of subscriptions drops down to zero:

// The default: stops observing the database when the number of subscriptions 
// drops down to zero, and restarts database observation on the next subscription.
//
// Database errors can be recovered by resubscribing to the shared observation.
let sharedObservation = ValueObservation
    .tracking { db in try Player.fetchAll(db) }
    .shared(in: dbQueue, extent: .whileObserved)
//                       ~~~~~~~~~~~~~~~~~~~~~~

// Only stops observing the database when the shared observation is deinitialized,
// and all subscriptions are cancelled.
//
// This extent prevents the shared observation from recovering from database
// errors. To recover from database errors, create a new
// SharedValueObservation instance.
let sharedObservation = ValueObservation
    .tracking { db in try Player.fetchAll(db) }
    .shared(in: dbQueue, extent: .observationLifetime)
//                       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:point_up: Note: ValueObservation and SharedValueObservation are nearly identical, but there is a difference you should be aware of: SharedValueObservation has no operator such as map. As a replacement, you may, for example, use Combine APIs:

let sharedObservation = ValueObservation.tracking { ... }.shared(in: dbQueue)
let cancellable = try sharedObservation
    .publisher() // Turn shared observation into a Combine Publisher
    .map { ... } // The map operator from Combine
    .sink(...)

Specifying the Region Tracked by ValueObservation

While the standard ValueObservation.tracking { db in ... } method lets you track changes to a fetched value, sometimes your use case requires more granular control over the tracked database region.

Consider a scenario where you’d like to get a specific Player’s row, but only when their score column changes. You can use tracking(region:fetch:) to do just that:

let observation = ValueObservation.tracking(
    // Define what database region constitutes a "change"
    region: Player.select(Column("score")).filter(id: 1),
    // Define what to fetch upon such change
    fetch: { db in try Player.fetchOne(db, id: 1) }
)

This overload of ValueObservation lets you entirely separate the observed region from the fetched value itself, providing utmost flexibility. See DatabaseRegionConvertible for more information.

ValueObservation Performance

This chapter further describes runtime aspects of ValueObservation, and provides some optimization tips for demanding applications.

ValueObservation is triggered by database transactions that may modify the tracked value.

For example, if you track the maximum score of players, all transactions that impact the score column of the player database table (any update, insertion, or deletion) trigger the observation, even if the maximum score itself is not changed.

You can filter out undesired duplicate notifications with the removeDuplicates method.
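For example, the maximum-score observation described above could be deduplicated as below (a sketch based on the raw SQL fetching shown earlier in this guide):

// Triggered by any transaction that impacts the 'score' column,
// but consecutive identical maximums are notified only once.
let observation = ValueObservation
    .tracking { db in try Int.fetchOne(db, sql: "SELECT MAX(score) FROM player") }
    .removeDuplicates()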

ValueObservation can create database contention. In other words, active observations take a toll on the constrained database resources. When triggered by impactful transactions, observations fetch fresh values, and can delay read and write database accesses of other application components.

When needed, you can help GRDB optimize observations and reduce database contention:

  1. :bulb: Tip: Stop observations when possible.

    For example, if a UIViewController needs to display database values, it can start the observation in viewWillAppear, and stop it in viewWillDisappear. Check the sample code above.

  2. :bulb: Tip: Share observations when possible.

    Each call to the ValueObservation.start method triggers an independent refresh of the observed value. When several components of your app are interested in the same value, consider sharing the observation.

  3. :bulb: Tip: Use a database pool, because it can perform multi-threaded database accesses.

  4. :bulb: Tip: When the observation processes some raw fetched values, use the map operator:

    // Plain observation
    let observation = ValueObservation.tracking { db -> MyValue in
        let players = try Player.fetchAll(db)
        return computeMyValue(players)
    }
    
    // Optimized observation
    let observation = ValueObservation
        .tracking { db in try Player.fetchAll(db) }
        .map { players in computeMyValue(players) }
    

    The map operator helps reduce database contention because it performs its job without blocking concurrent database reads.

  5. :bulb: Tip: When the observation tracks a constant database region, create an optimized observation with the ValueObservation.trackingConstantRegion(_:) method.

    The optimization only kicks in when the observation is started from a database pool: fresh values are fetched concurrently, and do not block database writes.

    The ValueObservation.trackingConstantRegion(_:) method has a precondition: the observed requests must fetch from a single and constant database region. The tracked region is made of tables, columns, and, when possible, rowids of individual rows. All changes that happen outside of this region do not impact the observation.

    For example:

    // Tracks the full 'player' table (only)
    let observation = ValueObservation.trackingConstantRegion { db -> [Player] in
        try Player.fetchAll(db)
    }
    
    // Tracks the row with id 42 in the 'player' table (only)
    let observation = ValueObservation.trackingConstantRegion { db -> Player? in
        try Player.fetchOne(db, id: 42)
    }
    
    // Tracks the 'score' column in the 'player' table (only)
    let observation = ValueObservation.trackingConstantRegion { db -> Int? in
        try Player.select(max(Column("score"))).fetchOne(db)
    }
    
    // Tracks both the 'player' and 'team' tables (only)
    let observation = ValueObservation.trackingConstantRegion { db -> ([Team], [Player]) in
        let teams = try Team.fetchAll(db)
        let players = try Player.fetchAll(db)
        return (teams, players)
    }
    

    When you want to observe a varying database region, make sure you use the plain ValueObservation.tracking(_:) method instead, or else some changes will not be notified.

    For example, consider those three observations below that depend on some user preference. They all track a varying region, and must use ValueObservation.tracking(_:):

    // Does not always track the same row in the player table.
    let observation = ValueObservation.tracking { db -> Player? in
        let pref = try Preference.fetchOne(db) ?? .default
        return try Player.fetchOne(db, id: pref.favoritePlayerId)
    }
    
    // Only tracks the 'user' table if there are some blocked emails.
    let observation = ValueObservation.tracking { db -> [User] in
        let pref = try Preference.fetchOne(db) ?? .default
        let blockedEmails = pref.blockedEmails
        return try User.filter(blockedEmails.contains(Column("email"))).fetchAll(db)
    }
    
    // Sometimes tracks the 'food' table, and sometimes the 'beverage' table.
    let observation = ValueObservation.tracking { db -> Int in
        let pref = try Preference.fetchOne(db) ?? .default
        switch pref.selection {
        case .food: return try Food.fetchCount(db)
        case .beverage: return try Beverage.fetchCount(db)
        }
    }
    

    When you are in doubt, add the print() method to your observation before starting it, and look in your application logs for lines that start with tracked region. Make sure the printed database region covers the changes you expect to be tracked.

    Examples of tracked regions:

      • empty: The empty region, which tracks nothing and never triggers the observation.
      • player(*): The full player table
      • player(id,name): The id and name columns of the player table
      • player(id,name)[1]: The id and name columns of the row with id 1 in the player table
      • player(*),preference(*): Both the full player and preference tables


DatabaseRegionObservation

DatabaseRegionObservation tracks changes in database requests, and notifies your application of each impactful transaction.

Tracked changes are insertions, updates, and deletions that impact the tracked requests, performed with the query interface, or raw SQL. This includes indirect changes triggered by foreign keys actions or SQL triggers.

:point_up: Note: Some changes are not notified: changes to internal system tables (such as sqlite_master), and changes to WITHOUT ROWID tables.

DatabaseRegionObservation calls your application right after changes have been committed in the database, and before any other thread has had any opportunity to perform further changes. This is a pretty strong guarantee that most applications do not really need. Instead, most applications prefer to be notified with fresh values: make sure you check ValueObservation before using DatabaseRegionObservation.

DatabaseRegionObservation Usage

Define an observation by providing one or several requests to track:

// Track all players
let observation = DatabaseRegionObservation(tracking: Player.all())

Then start the observation from a database queue or pool:

let observer = try observation.start(in: dbQueue) { (db: Database) in
    print("Players were changed")
}

And enjoy the changes notifications:

try dbQueue.write { db in
    try Player(name: "Arthur").insert(db)
}
// Prints "Players were changed"

By default, the observation lasts until the observer returned by the start method is deinitialized. See DatabaseRegionObservation.extent for more details.

You can also feed DatabaseRegionObservation with DatabaseRegion, or any type which conforms to the DatabaseRegionConvertible protocol. For example:

// Observe the full database
let observation = DatabaseRegionObservation(tracking: DatabaseRegion.fullDatabase)
let observer = try observation.start(in: dbQueue) { (db: Database) in
    print("Database was changed")
}

DatabaseRegionObservation.extent

The extent property lets you specify the duration of the observation. See Observation Extent for more details:

// This observation lasts until the database connection is closed
var observation = DatabaseRegionObservation...
observation.extent = .databaseLifetime
_ = try observation.start(in: dbQueue) { db in ... }

The default extent is .observerLifetime: the observation stops when the observer returned by start is deinitialized.

Regardless of the extent of an observation, you can always stop observation with the remove(transactionObserver:) method:

// Start
let observer = try observation.start(in: dbQueue) { db in ... }

// Stop
dbQueue.remove(transactionObserver: observer)

TransactionObserver Protocol

The TransactionObserver protocol lets you observe individual database changes and transactions:

protocol TransactionObserver : class {
    /// Notifies a database change:
    /// - event.kind (insert, update, or delete)
    /// - event.tableName
    /// - event.rowID
    ///
    /// For performance reasons, the event is only valid for the duration of
    /// this method call. If you need to keep it longer, store a copy:
    /// event.copy().
    func databaseDidChange(with event: DatabaseEvent)

    /// Filters the database changes that should be notified to the
    /// `databaseDidChange(with:)` method.
    func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool

    /// An opportunity to rollback pending changes by throwing an error.
    func databaseWillCommit() throws

    /// Database changes have been committed.
    func databaseDidCommit(_ db: Database)

    /// Database changes have been rolled back.
    func databaseDidRollback(_ db: Database)
}

Activate a Transaction Observer

To activate a transaction observer, add it to the database queue or pool:

let observer = MyObserver()
dbQueue.add(transactionObserver: observer)

By default, database holds weak references to its transaction observers: they are not retained, and stop getting notifications after they are deinitialized. See Observation Extent for more options.
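For example, a sketch that keeps notifications coming by retaining the observer in a property (assuming the MyObserver type above conforms to TransactionObserver):

final class ObserverOwner {
    // Strong reference: the observer keeps receiving notifications
    // as long as this owner is alive.
    private let observer = MyObserver()

    init(dbQueue: DatabaseQueue) {
        dbQueue.add(transactionObserver: observer)
    }
}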

Database Changes And Transactions

A transaction observer is notified of all database changes: inserts, updates and deletes. This includes indirect changes triggered by ON DELETE and ON UPDATE actions associated to foreign keys, and SQL triggers.

:point_up: Note: Some changes are not notified: changes to internal system tables (such as sqlite_master), changes to WITHOUT ROWID tables, and the deletion of duplicate rows triggered by ON CONFLICT REPLACE clauses (this last exception might change in a future release of SQLite).

:point_up: Note: Transactions performed during read-only database accesses are not notified.

Notified changes are not actually written to disk until the transaction commits, and the databaseDidCommit callback is called. On the other hand, databaseDidRollback confirms their invalidation:

try dbQueue.write { db in
    try db.execute(sql: "INSERT ...") // 1. didChange
    try db.execute(sql: "UPDATE ...") // 2. didChange
}                                     // 3. willCommit, 4. didCommit

try dbQueue.inTransaction { db in
    try db.execute(sql: "INSERT ...") // 1. didChange
    try db.execute(sql: "UPDATE ...") // 2. didChange
    return .rollback                  // 3. didRollback
}

try dbQueue.write { db in
    try db.execute(sql: "INSERT ...") // 1. didChange
    throw SomeError()
}                                     // 2. didRollback

Database statements that are executed outside of any transaction do not drop off the radar:

try dbQueue.inDatabase { db in
    try db.execute(sql: "INSERT ...") // 1. didChange, 2. willCommit, 3. didCommit
    try db.execute(sql: "UPDATE ...") // 4. didChange, 5. willCommit, 6. didCommit
}

Changes that are on hold because of a savepoint are only notified after the savepoint has been released. This makes sure that notified events are only events that have an opportunity to be committed:

try dbQueue.inTransaction { db in
    try db.execute(sql: "INSERT ...")            // 1. didChange

    try db.execute(sql: "SAVEPOINT foo")
    try db.execute(sql: "UPDATE ...")            // delayed
    try db.execute(sql: "UPDATE ...")            // delayed
    try db.execute(sql: "RELEASE SAVEPOINT foo") // 2. didChange, 3. didChange

    try db.execute(sql: "SAVEPOINT foo")
    try db.execute(sql: "UPDATE ...")            // not notified
    try db.execute(sql: "ROLLBACK TO SAVEPOINT foo")

    return .commit                               // 4. willCommit, 5. didCommit
}

Errors thrown from databaseWillCommit are exposed to the application code:

do {
    try dbQueue.inTransaction { db in
        ...
        return .commit           // 1. willCommit (throws), 2. didRollback
    }
} catch {
    // 3. The error thrown by the transaction observer.
}

:point_up: Note: all callbacks are called in a protected dispatch queue, and serialized with all database updates.

:point_up: Note: the databaseDidChange(with:) and databaseWillCommit() callbacks must not touch the SQLite database. This limitation does not apply to databaseDidCommit and databaseDidRollback which can use their database argument.
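For example, here is a sketch of an observer that reads from its database argument in databaseDidCommit, which is allowed there (unlike in databaseDidChange(with:) and databaseWillCommit()):

final class CommitLogger: TransactionObserver {
    func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool { false }
    func databaseDidChange(with event: DatabaseEvent) { }
    func databaseWillCommit() throws { }

    func databaseDidCommit(_ db: Database) {
        // Database access is allowed here:
        let count = (try? Player.fetchCount(db)) ?? 0
        print("players after commit:", count)
    }

    func databaseDidRollback(_ db: Database) { }
}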

DatabaseRegionObservation and ValueObservation are based on the TransactionObserver protocol.

See also TableChangeObserver.swift, which shows a transaction observer that notifies of modified database tables with NSNotificationCenter.

Filtering Database Events

Transaction observers can avoid being notified of database changes they are not interested in.

The filtering happens in the observes(eventsOfKind:) method, which tells whether the observer wants notification of specific kinds of changes, or not. For example, here is how an observer can focus on the changes that happen on the "player" database table:

class PlayerObserver: TransactionObserver {
    func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool {
        // Only observe changes to the "player" table.
        return eventKind.tableName == "player"
    }

    func databaseDidChange(with event: DatabaseEvent) {
        // This method is only called for changes that happen to
        // the "player" table.
    }
}

Generally speaking, the observes(eventsOfKind:) method can distinguish insertions from deletions and updates, and is also able to inspect the columns that are about to be changed:

class PlayerScoreObserver: TransactionObserver {
    func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool {
        // Only observe changes to the "score" column of the "player" table.
        switch eventKind {
        case .insert(let tableName):
            return tableName == "player"
        case .delete(let tableName):
            return tableName == "player"
        case .update(let tableName, let columnNames):
            return tableName == "player" && columnNames.contains("score")
        }
    }
}

When the observes(eventsOfKind:) method returns false for all event kinds, the observer is still notified of commits and rollbacks (except during read-only database accesses):

class PureTransactionObserver: TransactionObserver {
    func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool {
        // Ignore all individual changes
        return false
    }

    func databaseDidChange(with event: DatabaseEvent) { /* Never called */ }
    func databaseWillCommit() throws { /* Called before commit */ }
    func databaseDidRollback(_ db: Database) { /* Called on rollback */ }
    func databaseDidCommit(_ db: Database) { /* Called on commit */ }
}

For more information about event filtering, see DatabaseRegion.

Observation Extent

You can specify how long an observer is notified of database changes and transactions.

The remove(transactionObserver:) method explicitly stops notifications, at any time:

// From a database queue or pool:
dbQueue.remove(transactionObserver: observer)

// From a database connection:
dbQueue.inDatabase { db in
    db.remove(transactionObserver: observer)
}

Alternatively, use the extent parameter of the add(transactionObserver:extent:) method:

let observer = MyObserver()

// On a database queue or pool:
dbQueue.add(transactionObserver: observer) // default extent
dbQueue.add(transactionObserver: observer, extent: .observerLifetime)
dbQueue.add(transactionObserver: observer, extent: .nextTransaction)
dbQueue.add(transactionObserver: observer, extent: .databaseLifetime)

// On a database connection:
dbQueue.inDatabase { db in
    db.add(transactionObserver: ...)
}
  • The default extent is .observerLifetime: the database holds a weak reference to the observer, and the observation automatically ends when the observer is deinitialized. Meanwhile, the observer is notified of all changes and transactions.

  • .nextTransaction activates the observer until the current or next transaction completes. The database keeps a strong reference to the observer until its databaseDidCommit or databaseDidRollback method is eventually called. Hereafter the observer won’t get any further notification.

  • .databaseLifetime has the database retain and notify the observer until the database connection is closed.

Finally, an observer may ignore all database changes until the end of the current transaction:

class PlayerObserver: TransactionObserver {
    var playerTableWasModified = false

    func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool {
        eventKind.tableName == "player"
    }

    func databaseDidChange(with event: DatabaseEvent) {
        playerTableWasModified = true

        // It is pointless to keep on tracking further changes:
        stopObservingDatabaseChangesUntilNextTransaction()
    }
}

After stopObservingDatabaseChangesUntilNextTransaction(), the databaseDidChange(with:) method will not be notified of any change for the remaining duration of the current transaction. This helps GRDB optimize database observation.

DatabaseRegion

A DatabaseRegion is a union of database tables, columns, and rows (identified by their rowid):

|Table1 |   |Table2 |   |Table3 |   |Table4 |   |Table5 |
|-------|   |-------|   |-------|   |-------|   |-------|
|x|x|x|x|   |x| | | |   |x|x|x|x|   |x|x| |x|   | | | | |
|x|x|x|x|   |x| | | |   | | | | |   | | | | |   | |x| | |
|x|x|x|x|   |x| | | |   | | | | |   |x|x| |x|   | | | | |
|x|x|x|x|   |x| | | |   | | | | |   | | | | |   | | | | |

DatabaseRegion helps ValueObservation and DatabaseRegionObservation track changes in the database through the TransactionObserver protocol.

Note that observing a database region spots potential changes, not actual changes in the results of a request. A change is notified if and only if a statement has actually modified the tracked tables and columns by inserting, updating, or deleting a row.

For example, if you observe the region of Player.select(max(Column("score"))), then you’ll be notified of all changes performed on the score column of the player table (updates, insertions and deletions), even if they do not modify the value of the maximum score. However, you will not get any notification for changes performed on other database tables, or updates to other columns of the player table.
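Here is a sketch of this behavior with DatabaseRegionObservation, reusing the maximum-score request discussed above:

// The tracked region is the 'score' column of the 'player' table.
let request = Player.select(max(Column("score")))
let observation = DatabaseRegionObservation(tracking: request)

let observer = try observation.start(in: dbQueue) { db in
    // Called after each transaction that has modified the 'score'
    // column, even if the maximum score itself did not change.
    print("score column was modified")
}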

For more details, see the reference.

The DatabaseRegionConvertible Protocol

DatabaseRegionConvertible is a protocol for all types that can turn into a DatabaseRegion:

protocol DatabaseRegionConvertible {
    func databaseRegion(_ db: Database) throws -> DatabaseRegion
}

All requests adopt this protocol, and this allows them to be observed with DatabaseRegionObservation and ValueObservation.

For example:

// An observation triggered by all changes to the database
DatabaseRegionObservation(tracking: .fullDatabase)

// An observation triggered by all changes to the player table
DatabaseRegionObservation(tracking: Table("player"))

// An observation triggered by all changes to the row with rowid 1 in the player table
DatabaseRegionObservation(tracking: Player.filter(id: 1))

// An observation triggered by all changes to the score column of the player table
DatabaseRegionObservation(tracking: SQLRequest("SELECT score FROM player"))

Note that specifying a region as a request does not run the request. In the above example, Player.filter(id: 1) and SELECT score FROM player are never executed. They are only compiled by SQLite, so that GRDB understands the tables, rows, and columns that constitute the database region.

Support for SQLite Pre-Update Hooks

When SQLite is built with the SQLITE_ENABLE_PREUPDATE_HOOK option, TransactionObserver gets an extra callback which lets you observe individual column values in the rows modified by a transaction:

protocol TransactionObserver : class {
    #if SQLITE_ENABLE_PREUPDATE_HOOK
    /// Notifies before a database change (insert, update, or delete)
    /// with change information (initial / final values for the row's
    /// columns).
    ///
    /// The event is only valid for the duration of this method call. If you
    /// need to keep it longer, store a copy: event.copy().
    func databaseWillChange(with event: DatabasePreUpdateEvent)
    #endif
}

This extra API can be activated in two ways:

  1. Use the GRDB.swift CocoaPod with a custom compilation option, as below. It uses the system SQLite, which is compiled with SQLITE_ENABLE_PREUPDATE_HOOK support, but only on iOS 11.0+ (we don’t know the minimum version of macOS, tvOS, watchOS):

    pod 'GRDB.swift'
    platform :ios, '11.0' # or above
    
    post_install do |installer|
      installer.pods_project.targets.select { |target| target.name == "GRDB.swift" }.each do |target|
        target.build_configurations.each do |config|
          # Enable extra GRDB APIs
          config.build_settings['OTHER_SWIFT_FLAGS'] = "$(inherited) -D SQLITE_ENABLE_PREUPDATE_HOOK"
          # Enable extra SQLite APIs
          config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] = "$(inherited) GRDB_SQLITE_ENABLE_PREUPDATE_HOOK=1"
        end
      end
    end
    

    :warning: Warning: make sure you use the right platform version! You will get runtime errors on devices with a lower version.

    :point_up: Note: the GRDB_SQLITE_ENABLE_PREUPDATE_HOOK=1 option in GCC_PREPROCESSOR_DEFINITIONS defines some C function prototypes that are lacking from the system <sqlite3.h> header. When Xcode eventually ships with an SDK that includes a complete header, you may get a compiler error about duplicate function definitions. When this happens, just remove this GRDB_SQLITE_ENABLE_PREUPDATE_HOOK=1 option.

  2. Use a custom SQLite build and activate the SQLITE_ENABLE_PREUPDATE_HOOK compilation option.

Encryption

GRDB can encrypt your database with SQLCipher v3.4+.

Use CocoaPods, and specify in your Podfile:

# GRDB with SQLCipher 4
pod 'GRDB.swift/SQLCipher'
pod 'SQLCipher', '~> 4.0'

# GRDB with SQLCipher 3
pod 'GRDB.swift/SQLCipher'
pod 'SQLCipher', '~> 3.4'

Make sure you remove any existing pod 'GRDB.swift' from your Podfile. GRDB.swift/SQLCipher must be the only active GRDB pod in your whole project, or you will face linker or runtime errors, due to the conflicts between SQLCipher and the system SQLite.

Creating or Opening an Encrypted Database

You create and open an encrypted database by providing a passphrase to your database connection:

var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase("secret")
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

It is also in prepareDatabase that you perform other SQLCipher configuration steps that must happen early in the lifetime of a SQLCipher connection. For example:

var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase("secret")
    try db.execute(sql: "PRAGMA cipher_page_size = ...")
    try db.execute(sql: "PRAGMA kdf_iter = ...")
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

When you want to open an existing SQLCipher 3 database with SQLCipher 4, you may want to run the cipher_compatibility pragma:

// Open an SQLCipher 3 database with SQLCipher 4
var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase("secret")
    try db.execute(sql: "PRAGMA cipher_compatibility = 3")
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

See SQLCipher 4.0.0 Release and Upgrading to SQLCipher 4 for more information.

Changing the Passphrase of an Encrypted Database

You can change the passphrase of an already encrypted database.

When you use a database queue, open the database with the old passphrase, and then apply the new passphrase:

try dbQueue.write { db in
    try db.changePassphrase("newSecret")
}

When you use a database pool, make sure that no concurrent read can happen by changing the passphrase within the barrierWriteWithoutTransaction block. You must also ensure all future reads open a new database connection by calling the invalidateReadOnlyConnections method:

try dbPool.barrierWriteWithoutTransaction { db in
    try db.changePassphrase("newSecret")
    dbPool.invalidateReadOnlyConnections()
}

:point_up: Note: When an application wants to keep on using a database queue or pool after the passphrase has changed, it is responsible for providing the correct passphrase to the usePassphrase method called in the database preparation function. Consider:

// WRONG: this won't work across a passphrase change
let passphrase = try getPassphrase()
var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase(passphrase)
}

// CORRECT: get the latest passphrase when it is needed
var config = Configuration()
config.prepareDatabase { db in
    let passphrase = try getPassphrase()
    try db.usePassphrase(passphrase)
}

:point_up: Note: The DatabasePool.barrierWriteWithoutTransaction method does not prevent database snapshots from accessing the database during the passphrase change, or after the new passphrase has been applied to the database. Those database accesses may throw errors. Applications should provide their own mechanism for invalidating open snapshots before the passphrase is changed.

:point_up: Note: Instead of changing the passphrase “in place” as described here, you can also export the database in a new encrypted database that uses the new passphrase. See Exporting a Database to an Encrypted Database.

Exporting a Database to an Encrypted Database

Providing a passphrase won’t encrypt a clear-text database that already exists, though. SQLCipher can’t do that, and you will get an error instead: SQLite error 26: file is encrypted or is not a database.

Instead, create a new encrypted database, at a distinct location, and export the content of the existing database into it. This works both for encrypting a clear-text database and for changing the passphrase of an encrypted database.

The technique to do that is documented by SQLCipher.

With GRDB, it looks like this:

// The existing database
let existingDBQueue = try DatabaseQueue(path: "/path/to/existing.db")

// The new encrypted database, at some distinct location:
var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase("secret")
}
let newDBQueue = try DatabaseQueue(path: "/path/to/new.db", configuration: config)

try existingDBQueue.inDatabase { db in
    try db.execute(
        sql: """
            ATTACH DATABASE ? AS encrypted KEY ?;
            SELECT sqlcipher_export('encrypted');
            DETACH DATABASE encrypted;
            """,
        arguments: [newDBQueue.path, "secret"])
}

// Now the export is completed, and the existing database can be deleted.

Security Considerations

Managing the lifetime of the passphrase string

It is recommended to avoid keeping the passphrase in memory longer than necessary. To do this, make sure you load the passphrase from the prepareDatabase method:

// NOT RECOMMENDED: this keeps the passphrase in memory longer than necessary
let passphrase = try getPassphrase()
var config = Configuration()
config.prepareDatabase { db in
    try db.usePassphrase(passphrase)
}

// RECOMMENDED: only load the passphrase when it is needed
var config = Configuration()
config.prepareDatabase { db in
    let passphrase = try getPassphrase()
    try db.usePassphrase(passphrase)
}

This technique helps manage the lifetime of the passphrase, although keep in mind that the content of a String may remain intact in memory long after the object has been released.

For even better control over the lifetime of the passphrase in memory, use a Data object which natively provides the resetBytes function.

// RECOMMENDED: only load the passphrase when it is needed and reset its content immediately after use
var config = Configuration()
config.prepareDatabase { db in
    var passphrase = try getPassphraseData() // Data
    defer {
        passphrase.resetBytes(in: 0..<passphrase.count)
    }
    try db.usePassphrase(passphrase)
}

Some demanding users will want to go further, and manage the lifetime of the raw passphrase bytes. See below.

Managing the lifetime of the passphrase bytes

GRDB offers convenience methods for providing the database passphrases as Swift strings: usePassphrase(_:) and changePassphrase(_:). Those methods don’t keep the passphrase String in memory longer than necessary. But they are only as secure as the standard String type: the lifetime of the actual passphrase bytes in memory is not under your control.

When you want to precisely manage the passphrase bytes, talk directly to SQLCipher, using its raw C functions.

For example:

var config = Configuration()
config.prepareDatabase { db in
    ... // Carefully load passphrase bytes
    let code = sqlite3_key(db.sqliteConnection, /* passphrase bytes */)
    ... // Carefully dispose passphrase bytes
    guard code == SQLITE_OK else {
        throw DatabaseError(
            resultCode: ResultCode(rawValue: code), 
            message: db.lastErrorMessage)
    }
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

Passphrase availability vs. Database availability

When the passphrase is securely stored in the system keychain, your application can protect it using the kSecAttrAccessible attribute.

Such protection prevents GRDB from creating SQLite connections when the passphrase is not available:

var config = Configuration()
config.prepareDatabase { db in
    let passphrase = try loadPassphraseFromSystemKeychain()
    try db.usePassphrase(passphrase)
}

// Success if and only if the passphrase is available
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

For the same reason, database pools, which open SQLite connections on demand, may fail at any time as soon as the passphrase becomes unavailable:

// Success if and only if the passphrase is available
let dbPool = try DatabasePool(path: dbPath, configuration: config)

// May fail if passphrase has turned unavailable
try dbPool.read { ... }

// May trigger value observation failure if passphrase has turned unavailable
try dbPool.write { ... }

Because DatabasePool maintains a pool of long-lived SQLite connections, some database accesses will use an existing connection, and succeed. And some other database accesses will fail, as soon as the pool wants to open a new connection. It is impossible to predict which accesses will succeed or fail.

For the same reason, a database queue, which also maintains a long-lived SQLite connection, will remain available even after the passphrase has turned unavailable.

Applications are thus responsible for protecting database accesses when the passphrase is unavailable. To this end, they can use Data Protection. They can also destroy their instances of database queue or pool when the passphrase becomes unavailable.

Backup

You can back up (copy) a database into another.

Backups can, for example, help you copy an in-memory database to and from a database file when you implement NSDocument subclasses.

let source: DatabaseQueue = ...      // or DatabasePool
let destination: DatabaseQueue = ... // or DatabasePool
try source.backup(to: destination)

The backup method blocks the current thread until the destination database contains the same contents as the source database.

When the source is a database pool, concurrent writes can happen during the backup. Those writes may, or may not, be reflected in the backup, but they won’t trigger any error.

Database has an analogous backup method.

let source: DatabaseQueue = ...      // or DatabasePool
let destination: DatabaseQueue = ... // or DatabasePool
try source.write { sourceDb in
    try destination.barrierWriteWithoutTransaction { destDb in
        try sourceDb.backup(to: destDb)
    }
}

This method lets you choose the source and destination Database handles used to perform the backup.

Backup Progress Reporting

The backup methods take optional pagesPerStep and progress parameters. Together these parameters can be used to track a database backup in progress and abort an incomplete backup.

When pagesPerStep is provided, the database backup is performed in steps. At each step, no more than pagesPerStep database pages are copied from the source to the destination. The backup proceeds one step at a time until all pages have been copied.

When a progress callback is provided, it is called after every backup step, including the last, even when the backup completes in a single step.

try source.backup(
    to: destination,
    pagesPerStep: ...)
    { backupProgress in
       print("Database backup progress:", backupProgress)
    }

Aborting an Incomplete Backup

If a call to progress throws when backupProgress.isComplete == false, the backup will be aborted and the error rethrown. However, if a call to progress throws when backupProgress.isComplete == true, the backup is unaffected and the error is silently ignored.
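For example, here is a minimal sketch of a cancellable backup (the BackupCancellationError type and the backupCancelled flag are application-defined assumptions, and the pagesPerStep value is arbitrary):

```swift
struct BackupCancellationError: Error { }

// Assumed application-level cancellation flag:
var backupCancelled = false

do {
    try source.backup(to: destination, pagesPerStep: 10) { backupProgress in
        print("Database backup progress:", backupProgress)
        if backupCancelled && !backupProgress.isComplete {
            // Aborts the backup: this error is rethrown by the backup method.
            throw BackupCancellationError()
        }
    }
} catch is BackupCancellationError {
    // The backup was aborted before completion.
}
```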

:warning: Warning: Passing non-default values of pagesPerStep or progress to the backup methods is an advanced API intended to provide additional capabilities to expert users. GRDB’s backup API provides a faithful, low-level wrapper to the underlying SQLite online backup API. GRDB’s documentation is not a comprehensive substitute for the official SQLite documentation of their backup API.

Interrupt a Database

The interrupt() method causes any pending database operation to abort and return at its earliest opportunity.

It can be called from any thread.

dbQueue.interrupt()
dbPool.interrupt()

A call to interrupt() that occurs when there are no running SQL statements is a no-op and has no effect on SQL statements that are started after interrupt() returns.

A database operation that is interrupted will throw a DatabaseError with code SQLITE_INTERRUPT. If the interrupted SQL operation is an INSERT, UPDATE, or DELETE that is inside an explicit transaction, then the entire transaction will be rolled back automatically. If the rolled back transaction was started by a transaction-wrapping method such as DatabaseWriter.write or Database.inTransaction, then all database accesses will throw a DatabaseError with code SQLITE_ABORT until the wrapping method returns.

For example:

try dbQueue.write { db in
    try Player(...).insert(db)     // throws SQLITE_INTERRUPT
    try Player(...).insert(db)     // not executed
}                                  // throws SQLITE_INTERRUPT

try dbQueue.write { db in
    do {
        try Player(...).insert(db) // throws SQLITE_INTERRUPT
    } catch { }
}                                  // throws SQLITE_ABORT

try dbQueue.write { db in
    do {
        try Player(...).insert(db) // throws SQLITE_INTERRUPT
    } catch { }
    try Player(...).insert(db)     // throws SQLITE_ABORT
}                                  // throws SQLITE_ABORT

You can catch both SQLITE_INTERRUPT and SQLITE_ABORT errors:

do {
    try dbPool.write { db in ... }
} catch DatabaseError.SQLITE_INTERRUPT, DatabaseError.SQLITE_ABORT {
    // Oops, the database was interrupted.
}

For more information, see Interrupt A Long-Running Query.

Avoiding SQL Injection

SQL injection is a technique that lets an attacker nuke your database.

Image: XKCD, Exploits of a Mom (https://xkcd.com/327/)

Here is an example of code that is vulnerable to SQL injection:

// BAD BAD BAD
let id = 1
let name = textField.text
try dbQueue.write { db in
    try db.execute(sql: "UPDATE students SET name = '\(name)' WHERE id = \(id)")
}

If the user enters a funny string like Robert'; DROP TABLE students; --, SQLite will see the following SQL, and drop your database table instead of updating a name as intended:

UPDATE students SET name = 'Robert';
DROP TABLE students;
--' WHERE id = 1

To avoid those problems, never embed raw values in your SQL queries. The only correct technique is to provide arguments to your raw SQL queries:

let name = textField.text
try dbQueue.write { db in
    // Good
    try db.execute(
        sql: "UPDATE students SET name = ? WHERE id = ?",
        arguments: [name, id])

    // Just as good
    try db.execute(
        sql: "UPDATE students SET name = :name WHERE id = :id",
        arguments: ["name": name, "id": id])
}

When you use records and the query interface, GRDB always prevents SQL injection for you:

let id = 1
let name = textField.text
try dbQueue.write { db in
    if var student = try Student.fetchOne(db, id: id) {
        student.name = name
        try student.update(db)
    }
}

Error Handling

GRDB can throw DatabaseError or PersistenceError, or crash your program with a fatal error.

Considering that a local database is not some JSON loaded from a remote server, GRDB focuses on trusted databases. Dealing with untrusted databases requires extra care.

DatabaseError

DatabaseError are thrown on SQLite errors:

do {
    try Pet(masterId: 1, name: "Bobby").insert(db)
} catch let error as DatabaseError {
    // The SQLite error code: 19 (SQLITE_CONSTRAINT)
    error.resultCode

    // The extended error code: 787 (SQLITE_CONSTRAINT_FOREIGNKEY)
    error.extendedResultCode

    // The eventual SQLite message: FOREIGN KEY constraint failed
    error.message

    // The eventual erroneous SQL query
    // "INSERT INTO pet (masterId, name) VALUES (?, ?)"
    error.sql

    // The eventual SQL arguments
    // [1, "Bobby"]
    error.arguments

    // Full error description
    // > SQLite error 19: FOREIGN KEY constraint failed -
    // > while executing `INSERT INTO pet (masterId, name) VALUES (?, ?)`
    error.description
}

If you want to see statement arguments in the error description, make statement arguments public.
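Assuming the option referred to above is Configuration.publicStatementArguments, a sketch of a debug-only opt-in could look like this:

```swift
var config = Configuration()
#if DEBUG
// Make statement arguments visible in error descriptions and trace logs.
// Avoid this in release builds: arguments may contain sensitive data.
config.publicStatementArguments = true
#endif
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)
```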

SQLite uses result codes to distinguish between various errors.

You can catch a DatabaseError and match on result codes:

do {
    try ...
} catch let error as DatabaseError {
    switch error {
    case DatabaseError.SQLITE_CONSTRAINT_FOREIGNKEY:
        // foreign key constraint error
    case DatabaseError.SQLITE_CONSTRAINT:
        // any other constraint error
    default:
        // any other database error
    }
}

You can also directly match errors on result codes:

do {
    try ...
} catch DatabaseError.SQLITE_CONSTRAINT_FOREIGNKEY {
    // foreign key constraint error
} catch DatabaseError.SQLITE_CONSTRAINT {
    // any other constraint error
} catch {
    // any other database error
}

Each DatabaseError has two codes: an extendedResultCode (see extended result code), and a less precise resultCode (see primary result code). Extended result codes are refinements of primary result codes, as SQLITE_CONSTRAINT_FOREIGNKEY is to SQLITE_CONSTRAINT, for example.

:warning: Warning: SQLite has progressively introduced extended result codes across its versions. The SQLite release notes are unfortunately not quite clear about that: write your handling of extended result codes with care.
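One defensive pattern, sketched below with the Pet example from above, is to match on the primary result code first, and only then refine on the extended code:

```swift
do {
    try Pet(masterId: 1, name: "Bobby").insert(db)
} catch let error as DatabaseError where error.resultCode == .SQLITE_CONSTRAINT {
    if error.extendedResultCode == .SQLITE_CONSTRAINT_FOREIGNKEY {
        // foreign key constraint error
    } else {
        // any other constraint error (or an SQLite version that does not
        // report this particular extended code)
    }
}
```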

PersistenceError

PersistenceError is thrown by the PersistableRecord protocol, in a single case: when the update method could not find any row to update:

do {
    try player.update(db)
} catch let PersistenceError.recordNotFound(databaseTableName: table, key: key) {
    print("Key \(key) was not found in table \(table).")
}

Fatal Errors

Fatal errors indicate that the program, or the database, has to be changed.

They uncover programmer errors and false assumptions, and they prevent misuse. Here are a few examples:

  • The code asks for a non-optional value, when the database contains NULL:

    // fatal error: could not convert NULL to String.
    let name: String = row["name"]
    

    Solution: fix the contents of the database, use NOT NULL constraints, or load an optional:

    let name: String? = row["name"]
    
  • Conversion from database value to Swift type fails:

    // fatal error: could not convert "Mom’s birthday" to Date.
    let date: Date = row["date"]
    
    // fatal error: could not convert "" to URL.
    let url: URL = row["url"]
    

    Solution: fix the contents of the database, or use DatabaseValue to handle all possible cases:

    let dbValue: DatabaseValue = row["date"]
    if dbValue.isNull {
        // Handle NULL
    } else if let date = Date.fromDatabaseValue(dbValue) {
        // Handle valid date
    } else {
        // Handle invalid date
    }
    
  • The database can’t guarantee that the code does what it says:

    // fatal error: table player has no unique index on column email
    try Player.deleteOne(db, key: ["email": "arthur@example.com"])
    

    Solution: add a unique index to the player.email column, or use the deleteAll method to make it clear that you may delete more than one row:

    try Player.filter(Column("email") == "arthur@example.com").deleteAll(db)
    
  • Database connections are not reentrant:

    // fatal error: Database methods are not reentrant.
    dbQueue.write { db in
        dbQueue.write { db in
            ...
        }
    }
    

    Solution: avoid reentrancy, and instead pass a database connection along.

How to Deal with Untrusted Inputs

Let’s consider the code below:

let sql = "SELECT ..."

// Some untrusted arguments for the query
let arguments: [String: Any] = ...
let rows = try Row.fetchCursor(db, sql: sql, arguments: StatementArguments(arguments))

while let row = try rows.next() {
    // Some untrusted database value:
    let date: Date? = row[0]
}

It has two opportunities to throw fatal errors:

  • Untrusted arguments: The dictionary may contain values that do not conform to the DatabaseValueConvertible protocol, or may be missing keys required by the statement.
  • Untrusted database content: The row may contain a non-null value that can’t be turned into a date.

In such a situation, you can still avoid fatal errors by exposing and handling each failure point, one level down in the GRDB API:

// Untrusted arguments
if let arguments = StatementArguments(arguments) {
    let statement = try db.makeStatement(sql: sql)
    try statement.setArguments(arguments)

    let cursor = try Row.fetchCursor(statement)
    while let row = try cursor.next() {
        // Untrusted database content
        let dbValue: DatabaseValue = row[0]
        if dbValue.isNull {
            // Handle NULL
        } else if let date = Date.fromDatabaseValue(dbValue) {
            // Handle valid date
        } else {
            // Handle invalid date
        }
    }
}

See prepared statements and DatabaseValue for more information.

Error Log

SQLite can be configured to invoke a callback function containing an error code and a terse error message whenever anomalies occur.

This global error callback must be configured early in the lifetime of your application:

Database.logError = { (resultCode, message) in
    NSLog("%@", "SQLite error \(resultCode): \(message)")
}

:warning: Warning: Database.logError must be set before any database connection is opened. This includes the connections that your application opens with GRDB, but also connections opened by other tools, such as third-party libraries. Setting it after a connection has been opened is an SQLite misuse, and has no effect.

See The Error And Warning Log for more information.

Unicode

SQLite lets you store unicode strings in the database.

However, SQLite does not provide any unicode-aware string transformations or comparisons.

Unicode functions

The UPPER and LOWER built-in SQLite functions are not unicode-aware:

// "JéRôME"
try String.fetchOne(db, sql: "SELECT UPPER('Jérôme')")

GRDB extends SQLite with SQL functions that call the Swift built-in string functions capitalized, lowercased, uppercased, localizedCapitalized, localizedLowercased and localizedUppercased:

// "JÉRÔME"
let uppercased = DatabaseFunction.uppercase
try String.fetchOne(db, sql: "SELECT \(uppercased.name)('Jérôme')")

Those unicode-aware string functions are also readily available in the query interface:

Player.select(nameColumn.uppercased)

String Comparison

SQLite compares strings on many occasions: when you sort rows according to a string column, or when you use a comparison operator such as = or <=.

The comparison result comes from a collating function, or collation. SQLite comes with three built-in collations that do not support Unicode: binary, nocase, and rtrim.

GRDB comes with five extra collations that leverage unicode-aware comparisons based on the standard Swift String comparison functions and operators:

  • unicodeCompare (uses the built-in <= and == Swift operators)
  • caseInsensitiveCompare
  • localizedCaseInsensitiveCompare
  • localizedCompare
  • localizedStandardCompare

A collation can be applied to a table column. All comparisons involving this column will then automatically trigger the comparison function:

try db.create(table: "player") { t in
    // Guarantees case-insensitive email unicity
    t.column("email", .text).unique().collate(.nocase)

    // Sort names in a localized case insensitive way
    t.column("name", .text).collate(.localizedCaseInsensitiveCompare)
}

// Players are sorted in a localized case insensitive way:
let players = try Player.order(nameColumn).fetchAll(db)

:warning: Warning: SQLite requires host applications to provide the definition of any collation other than binary, nocase and rtrim. When a database file has to be shared or migrated to another SQLite library or platform (such as the Android version of your application), make sure you provide a compatible collation.

If you can’t or don’t want to define the comparison behavior of a column (see warning above), you can still use an explicit collation in SQL requests and in the query interface:

let collation = DatabaseCollation.localizedCaseInsensitiveCompare
let players = try Player.fetchAll(db,
    sql: "SELECT * FROM player ORDER BY name COLLATE \(collation.name))")
let players = try Player.order(nameColumn.collating(collation)).fetchAll(db)

You can also define your own collations:

let collation = DatabaseCollation("customCollation") { (lhs, rhs) -> ComparisonResult in
    // Return the comparison of the lhs and rhs strings:
    return lhs.compare(rhs)
}

// Make the collation available to a database connection
var config = Configuration()
config.prepareDatabase { db in
    db.add(collation: collation)
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

Memory Management

Both SQLite and GRDB use non-essential memory that helps them perform better.

You can reclaim this memory with the releaseMemory method:

// Release as much memory as possible.
dbQueue.releaseMemory()
dbPool.releaseMemory()

This method blocks the current thread until all current database accesses are completed, and the memory collected.

:warning: Warning: If DatabasePool.releaseMemory() is called while a long read is performed concurrently, then no other read access will be possible until this long read has completed and the memory has been released. If this does not suit your application needs, consider the asynchronous options below.

You can release memory in an asynchronous way as well:

// On a DatabaseQueue
dbQueue.asyncWriteWithoutTransaction { db in
    db.releaseMemory()
}

// On a DatabasePool
dbPool.releaseMemoryEventually()

DatabasePool.releaseMemoryEventually() does not block the current thread, and does not prevent concurrent database accesses. In exchange for this convenience, you don’t know when memory has been freed.

Memory Management on iOS

The iOS operating system likes applications that do not consume much memory.

Database queues and pools automatically free non-essential memory when the application receives a memory warning, and when the application enters the background.

You can opt out of this automatic memory management:

var config = Configuration()
config.automaticMemoryManagement = false
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config) // or DatabasePool

Data Protection

Data Protection lets you protect files so that they are encrypted and unavailable until the device is unlocked.

Data protection can be enabled globally for all files created by an application.

You can also explicitly protect a database, by configuring its enclosing directory. This will not only protect the database file, but also all temporary files created by SQLite (including the persistent .shm and .wal files created by database pools).

For example, to explicitly use complete protection:

// Paths
let fileManager = FileManager.default
let directoryURL = try fileManager
    .url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    .appendingPathComponent("database", isDirectory: true)
let databaseURL = directoryURL.appendingPathComponent("db.sqlite")

// Create directory if needed
var isDirectory: ObjCBool = false
if !fileManager.fileExists(atPath: directoryURL.path, isDirectory: &isDirectory) {
    try fileManager.createDirectory(atPath: directoryURL.path, withIntermediateDirectories: false)
} else if !isDirectory.boolValue {
    throw NSError(domain: NSCocoaErrorDomain, code: NSFileWriteFileExistsError, userInfo: nil)
}

// Enable data protection
try fileManager.setAttributes([.protectionKey : FileProtectionType.complete], ofItemAtPath: directoryURL.path)

// Open database
let dbQueue = try DatabaseQueue(path: databaseURL.path)

When a database is protected, an application that runs in the background on a locked device won’t be able to read or write from it. Instead, it will get DatabaseError with code SQLITE_IOERR (10) “disk I/O error”, or SQLITE_AUTH (23) “not authorized”.

You can catch those errors and wait for the UIApplicationDelegate.applicationProtectedDataDidBecomeAvailable(_:) callback or the UIApplicationProtectedDataDidBecomeAvailable notification before retrying the failed database operation.
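For example, a minimal sketch of such a retry (retryFailedDatabaseOperation() is an assumed application method):

```swift
import UIKit

// Hypothetical: retry the failed database operation as soon as
// protected data becomes available again.
let retryObserver = NotificationCenter.default.addObserver(
    forName: UIApplication.protectedDataDidBecomeAvailableNotification,
    object: nil,
    queue: .main) { _ in
        retryFailedDatabaseOperation() // assumed application method
    }
// Keep retryObserver alive, and remove it once the retry has succeeded.
```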

FAQ

  • FAQ: Opening Connections
  • FAQ: SQL
  • FAQ: General
  • FAQ: Associations
  • FAQ: ValueObservation
  • FAQ: Errors

FAQ: Opening Connections

How do I create a database in my application?

This question assumes that your application has to create a new database from scratch. If your app has to open an existing database that is embedded inside your application as a resource, see How do I open a database stored as a resource of my application? instead.

The database has to be stored in a valid place where it can be created and modified. For example, in the Application Support directory:

let databaseURL = try FileManager.default
    .url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    .appendingPathComponent("db.sqlite")
let dbQueue = try DatabaseQueue(path: databaseURL.path)

How do I open a database stored as a resource of my application?

If your application does not need to modify the database, open a read-only connection to your resource:

var config = Configuration()
config.readonly = true
let dbPath = Bundle.main.path(forResource: "db", ofType: "sqlite")!
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

If the application should modify the database, copy it first to a place where it can be modified, such as the Application Support directory, and only then open a connection:

let fileManager = FileManager.default
let dbPath = try fileManager
    .url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    .appendingPathComponent("db.sqlite")
    .path
if !fileManager.fileExists(atPath: dbPath) {
    let dbResourcePath = Bundle.main.path(forResource: "db", ofType: "sqlite")!
    try fileManager.copyItem(atPath: dbResourcePath, toPath: dbPath)
}
let dbQueue = try DatabaseQueue(path: dbPath)

How do I close a database connection?

Database connections are automatically closed when they are deinitialized.

When the correct execution of your program depends on precise database closing, use the close() method:

try dbQueue.close()

This explicit close() may fail with an error. See the inline documentation of this method for more information. Generally speaking, you should not call this method: rely on automatic closing instead.

FAQ: SQL

How do I print a request as SQL?

When you want to debug a request that does not deliver the expected results, you may want to print the SQL that is actually executed.

You can compile the request into a prepared statement:

try dbQueue.read { db in
    let request = Player.filter(Column("email") == "arthur@example.com")
    let statement = try request.makePreparedRequest(db).statement
    print(statement) // SELECT * FROM player WHERE email = ?
    print(statement.arguments) // ["arthur@example.com"]
}

Another option is to set up a tracing function that prints out the executed SQL requests. For example, provide a tracing function when you connect to the database:

// Prints all SQL statements
var config = Configuration()
config.prepareDatabase { db in
    db.trace { print($0) }
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

try dbQueue.read { db in
    // Prints "SELECT * FROM player WHERE email = ?"
    let players = try Player.filter(Column("email") == "arthur@example.com").fetchAll(db)
}

If you want to see statement arguments such as 'arthur@example.com' in the logged statements, make statement arguments public.

:point_up: Note: the generated SQL may change between GRDB releases, without notice: don’t have your application rely on any specific SQL output.

FAQ: General

How do I monitor the duration of database statements execution?

Use the trace(options:_:) method, with the .profile option:

var config = Configuration()
config.prepareDatabase { db in
    db.trace(options: .profile) { event in
        // Prints all SQL statements with their duration
        print(event)

        // Access to detailed profiling information
        if case let .profile(statement, duration) = event, duration > 0.5 {
            print("Slow query: \(statement.sql)")
        }
    }
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

try dbQueue.read { db in
    let players = try Player.filter(Column("email") == "arthur@example.com").fetchAll(db)
    // Prints "0.003s SELECT * FROM player WHERE email = ?"
}

If you want to see statement arguments such as 'arthur@example.com' in the logged statements, make statement arguments public.

What Are Experimental Features?

Since GRDB 1.0, all backwards compatibility guarantees of semantic versioning apply: no breaking change will happen until the next major version of the library.

There is an exception, though: experimental features, marked with the “:fire: EXPERIMENTAL” badge. Those are advanced features that are too young, or lack user feedback. They are not stabilized yet.

Those experimental features are not protected by semantic versioning, and may break between two minor releases of the library. To help them become stable, your feedback is greatly appreciated.

Does GRDB support library evolution and ABI stability?

No, GRDB does not support library evolution and ABI stability. The only promise is API stability according to semantic versioning, with an exception for experimental features.

Yet, GRDB can be built with the “Build Libraries for Distribution” Xcode option (BUILD_LIBRARY_FOR_DISTRIBUTION), so that you can build binary frameworks at your convenience.

FAQ: Associations

How do I filter records and only keep those that are associated to another record?

Let’s say you have two record types, Book and Author, and you want to only fetch books that have an author, and discard anonymous books.

We start by defining the association between books and authors:

struct Book: TableRecord {
    ...
    static let author = belongsTo(Author.self)
}

struct Author: TableRecord {
    ...
}

And then we can write our request and only fetch books that have an author, discarding anonymous ones:

let books: [Book] = try dbQueue.read { db in
    // SELECT book.* FROM book 
    // JOIN author ON author.id = book.authorID
    let request = Book.joining(required: Book.author)
    return try request.fetchAll(db)
}

Note how this request does not use the filter method. Indeed, we don’t have any condition to express on any column. Instead, we just need to “require that a book can be joined to its author”.

See How do I filter records and only keep those that are NOT associated to another record? below for the opposite question.

How do I filter records and only keep those that are NOT associated to another record?

Let’s say you have two record types, Book and Author, and you want to only fetch anonymous books that do not have any author.

We start by defining the association between books and authors:

struct Book: TableRecord {
    ...
    static let author = belongsTo(Author.self)
}

struct Author: TableRecord {
    ...
}

And then we can write our request and only fetch anonymous books that don’t have any author:

let books: [Book] = try dbQueue.read { db in
    // SELECT book.* FROM book
    // LEFT JOIN author ON author.id = book.authorID
    // WHERE author.id IS NULL
    let authorAlias = TableAlias()
    let request = Book
        .joining(optional: Book.author.aliased(authorAlias))
        .filter(!authorAlias.exists)
    return try request.fetchAll(db)
}

This request uses a TableAlias in order to filter on the eventual associated author: the !authorAlias.exists condition requires that the author primary key is NULL, which is another way to say that the author does not exist: the book has no author.

See How do I filter records and only keep those that are associated to another record? above for the opposite question.

How do I select only one column of an associated record?

Let’s say you have two record types, Book and Author, and you want to fetch all books with their author name, but not the full associated author records.

We start by defining the association between books and authors:

struct Book: Decodable, TableRecord {
    ...
    static let author = belongsTo(Author.self)
}

struct Author: Decodable, TableRecord {
    ...
    enum Columns {
        static let name = Column(CodingKeys.name)
    }
}

And then we can write our request and the ad-hoc record that decodes it:

struct BookInfo: Decodable, FetchableRecord {
    var book: Book
    var authorName: String? // nil when the book is anonymous

    static func all() -> QueryInterfaceRequest<BookInfo> {
        // SELECT book.*, author.name AS authorName
        // FROM book
        // LEFT JOIN author ON author.id = book.authorID
        let authorName = Author.Columns.name.forKey(CodingKeys.authorName)
        return Book
            .annotated(withOptional: Book.author.select(authorName))
            .asRequest(of: BookInfo.self)
    }
}

let bookInfos: [BookInfo] = try dbQueue.read { db in
    BookInfo.all().fetchAll(db)
}

By defining the request as a static method of BookInfo, you have access to the private CodingKeys.authorName, and a compiler-checked SQL column name.

By using the annotated(withOptional:) method, you append the author name to the top-level selection that can be decoded by the ad-hoc record.

By using asRequest(of:), you enhance the type-safety of your request.

FAQ: ValueObservation

Why is ValueObservation not publishing value changes?

Sometimes it looks like a ValueObservation does not notify the changes you expect.

There are four possible reasons for this:

  1. The expected changes were not committed into the database.
  2. The expected changes were committed into the database, but were quickly overwritten.
  3. The observation was stopped.
  4. The observation does not track the expected database region.

To check the first two possibilities, look at the SQL statements executed by the database, by setting up a tracing function when you open the database connection:

// Prints all SQL statements
var config = Configuration()
config.prepareDatabase { db in
    db.trace { print("SQL: \($0)") }
}
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)

If, after that, you are convinced that the expected changes were committed into the database, and not overwritten soon after, trace observation events:

let observation = ValueObservation
    .tracking { db in ... }
    .print() // <- trace observation events
let cancellable = observation.start(...)

Look at the observation logs which start with cancel or failure: maybe the observation was cancelled by your app, or did fail with an error.

Look at the observation logs which start with value: make sure, again, that the expected value was not actually notified, then overwritten.

Finally, look at the observation logs which start with tracked region. Does the printed database region cover the expected changes?

For example:

  • empty: The empty region, which tracks nothing and never triggers the observation.
  • player(*): The full player table
  • player(id,name): The id and name columns of the player table
  • player(id,name)[1]: The id and name columns of the row with id 1 in the player table
  • player(*),team(*): Both the full player and team tables

If you happen to use the ValueObservation.trackingConstantRegion(_:) method and see a mismatch between the tracked region and your expectation, then switch your observation to the tracking(_:) method. The logs which start with tracked region should now evolve to include the expected changes, and you should get the expected notifications.
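For example, here is a sketch of an observation whose tracked region depends on fetched data (the settings table and teamId column are hypothetical); such an observation should use tracking(_:) rather than trackingConstantRegion(_:):

```swift
// The region read by this closure depends on fetched data (the preferred
// team stored in the hypothetical "settings" table): tracking(_:)
// re-evaluates the tracked region on each fetch, so that the observation
// follows the newly preferred team.
let observation = ValueObservation.tracking { db -> [Player] in
    let teamId = try Int64.fetchOne(db, sql: "SELECT preferredTeamId FROM settings")
    return try Player.filter(Column("teamId") == teamId).fetchAll(db)
}
```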

If after all those steps (thank you!), your observation is still failing you, please open an issue and provide a minimal reproducible example!

FAQ: Errors

Generic parameter ‘T’ could not be inferred

You may get this error when using the read and write methods of database queues and pools:

// Generic parameter 'T' could not be inferred
let string = try dbQueue.read { db in
    let result = try String.fetchOne(db, ...)
    return result
}

This is a limitation of the Swift compiler.

The general workaround is to explicitly declare the type of the closure result:

// General Workaround
let string = try dbQueue.read { db -> String? in
    let result = try String.fetchOne(db, ...)
    return result
}

You can also, when possible, write a single-line closure:

// Single-line closure workaround:
let string = try dbQueue.read { db in
    try String.fetchOne(db, ...)
}

Mutation of captured var in concurrently-executing code

The insert and save persistence methods can trigger a compiler error in async contexts:

var player = Player(id: nil, name: "Arthur")
try await dbWriter.write { db in
    // Error: Mutation of captured var 'player' in concurrently-executing code
    try player.insert(db)
}
print(player.id) // A non-nil id

When this happens, prefer the inserted and saved methods instead:

// OK
var player = Player(id: nil, name: "Arthur")
player = try await dbWriter.write { [player] db in
    return try player.inserted(db)
}
print(player.id) // A non-nil id

SQLite error 1 “no such column”

This error message is self-explanatory: do check for misspelled or non-existing column names.

However, sometimes this error only happens when the app runs on a recent operating system (iOS 14+, Big Sur+, etc.). The error does not happen on older systems.

When this is the case, there are two possible explanations:

  1. Maybe a column name is really misspelled or missing from the database schema.

    To find it, check the SQL statement that comes with the DatabaseError.

  2. Maybe the application is using the double-quote character (") instead of the single quote (') as the delimiter for string literals in raw SQL queries. Recent versions of SQLite reject this deviation from the SQL standard, and this is why you are seeing this error.

    For example: this is not standard SQL: UPDATE player SET name = "Arthur".

    The standard version is: UPDATE player SET name = 'Arthur'.

    It just happens that old versions of SQLite used to accept the former, non-standard version. Newer versions are able to reject it with an error.

    The fix is to change the SQL statements run by the application: replace " with ' in your string literals.

    It may also be time to learn about statement arguments and SQL injection:

    let name: String = ...
    
    // NOT STANDARD (double quote)
    try db.execute(sql: """
        UPDATE player SET name = "\(name)"
        """)
    
    // STANDARD, BUT STILL NOT RECOMMENDED (single quote)
    try db.execute(sql: "UPDATE player SET name = '\(name)'")
    
    // STANDARD, AND RECOMMENDED (statement arguments)
    try db.execute(sql: "UPDATE player SET name = ?", arguments: [name])
    

For more information, see Double-quoted String Literals Are Accepted, and Configuration.acceptsDoubleQuotedStringLiterals.
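If you cannot fix the SQL statements right away, a temporary escape hatch is sketched below with Configuration.acceptsDoubleQuotedStringLiterals:

```swift
// Not recommended: accept the non-standard double-quoted string literals
// again, as a temporary measure while the SQL statements get fixed.
var config = Configuration()
config.acceptsDoubleQuotedStringLiterals = true
let dbQueue = try DatabaseQueue(path: dbPath, configuration: config)
```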

SQLite error 10 “disk I/O error”, SQLite error 23 “not authorized”

Those errors may be the sign that SQLite can’t access the database due to data protection.

When your application should be able to run in the background on a locked device, it has to catch this error and, for example, wait for the UIApplicationDelegate.applicationProtectedDataDidBecomeAvailable(_:) callback or the UIApplicationProtectedDataDidBecomeAvailable notification before retrying the failed database operation.

do {
    try ...
} catch DatabaseError.SQLITE_IOERR, DatabaseError.SQLITE_AUTH {
    // Handle possible data protection error
}

This error can also be prevented altogether by using a more relaxed file protection.

SQLite error 21 “wrong number of statement arguments” with LIKE queries

You may get the error “wrong number of statement arguments” when executing a LIKE query similar to:

let name = textField.text
let players = try dbQueue.read { db in
    try Player.fetchAll(db, sql: "SELECT * FROM player WHERE name LIKE '%?%'", arguments: [name])
}

The problem lies in the '%?%' pattern.

SQLite only interprets ? as a parameter when it is a placeholder for a whole value (int, double, string, blob, null). In this incorrect query, ? is just a character in the '%?%' string: it is not a query parameter, and is not processed in any way. See https://www.sqlite.org/lang_expr.html#varparam for more information about SQLite parameters.

To fix the error, you can feed the request with the pattern itself, instead of the name:

let name = textField.text
let players: [Player] = try dbQueue.read { db in
    let pattern = "%\(name)%"
    return try Player.fetchAll(db, sql: "SELECT * FROM player WHERE name LIKE ?", arguments: [pattern])
}

Sample Code


Thanks


URIs don’t change: people change them.

Changes Tracking

This chapter has been renamed Record Comparison.

Customized Decoding of Database Rows

This chapter has been renamed Beyond FetchableRecord.

Dealing with External Connections

This chapter has been superseded by the Sharing a Database guide.

Enabling FTS5 Support

This chapter has moved.

FetchedRecordsController

FetchedRecordsController has been removed in GRDB 5.

The Database Observation chapter describes the other ways to observe the database.

This chapter has moved.

Migrations

This chapter has moved.

NSNumber and NSDecimalNumber

This chapter has moved

Persistable Protocol

This protocol has been renamed PersistableRecord in GRDB 3.0.

RowConvertible Protocol

This protocol has been renamed FetchableRecord in GRDB 3.0.

TableMapping Protocol

This protocol has been renamed TableRecord in GRDB 3.0.

ValueObservation and DatabaseRegionObservation

This chapter has been superseded by ValueObservation and DatabaseRegionObservation.

Concurrency

This chapter has moved.

Guarantees and Rules

This chapter is now split into Concurrency Rules and Safe and Unsafe Database Accesses.

Differences between Database Queues and Pools

This chapter has moved.

Advanced DatabasePool

This chapter has moved.

Database Snapshots

This chapter has moved.

DatabaseWriter and DatabaseReader Protocols

This chapter was removed. See the references of DatabaseReader and DatabaseWriter.

Asynchronous APIs

This chapter has moved.

Unsafe Concurrency APIs

This chapter has moved.

Adding support for missing SQL functions or operators

This chapter was renamed to Embedding SQL in Query Interface Requests.