SoloDB Documentation
SoloDB is an embedded document database for .NET that stores your objects as JSON documents inside SQLite. It gives you the flexibility of a NoSQL database with the reliability of SQLite, all without running a separate server.
When to Use SoloDB
SoloDB is ideal for:
- Desktop applications that need local data persistence
- Mobile apps via .NET MAUI
- Small to medium web applications that don't need distributed databases
- Prototyping when you want to store objects without defining schemas
- Embedded systems where you need a reliable, self-contained database
Key Characteristics
- Documents stored as SQLite JSONB - binary JSON for efficient storage and querying
- Full LINQ support with compile-time type safety
- ACID transactions inherited from SQLite
- Thread-safe with built-in connection pooling
- Zero configuration - just create the database and start using it
Installation
Install from NuGet:
dotnet add package SoloDB

Requirements
- .NET Standard 2.0 or 2.1 (compatible with .NET Framework 4.6.1+, .NET Core 2.0+, .NET 5+)
- Works on Windows, Linux, and macOS
First Steps
Let's store and retrieve some data. First, define a class for your data:
public class User
{
public long Id { get; set; } // This becomes the primary key
public string Name { get; set; }
public string Email { get; set; }
public DateTime CreatedAt { get; set; }
}

Now create a database, get a collection, and perform operations:
using SoloDatabase;
// Create or open a database file
using var db = new SoloDB("myapp.db");
// Get a typed collection - creates it automatically if it doesn't exist
var users = db.GetCollection<User>();
// Insert a document
var user = new User
{
Name = "Alice",
Email = "alice@example.com",
CreatedAt = DateTime.UtcNow
};
users.Insert(user);
// user.Id is now set to the auto-generated value (e.g., 1)
// Query with LINQ
var found = users.FirstOrDefault(u => u.Email == "alice@example.com");
// Update
found.Name = "Alice Smith";
users.Update(found);
// Delete
users.Delete(found.Id);

Note: The using statement ensures the database connection is properly closed when done. In long-running applications, you typically create one SoloDB instance and reuse it throughout the application's lifetime.
How Data is Stored
Understanding how SoloDB stores your data helps you design better models and write efficient queries.
The Storage Model
Each collection is a SQLite table with two columns:
- Id - INTEGER PRIMARY KEY (auto-incremented by default)
- Value - JSONB containing your serialized object
When you insert this object:
var user = new User { Name = "Alice", Email = "alice@example.com" };
users.Insert(user);

SoloDB creates a row like this:
// Conceptually:
// Id: 1
// Value: {"Name":"Alice","Email":"alice@example.com","CreatedAt":"..."}

JSONB Format
SoloDB uses SQLite's native JSONB (binary JSON) format, which means:
- Queries can use SQLite's JSON functions for efficient filtering
- No JSON parsing overhead on every read - binary format is faster
- You can even query the data using raw SQL with JSON functions
Serialization Rules
Understanding what gets serialized is crucial for designing your data models correctly.
Classes: Only Public Properties
For classes, SoloDB serializes public instance properties with getters. Fields are ignored.
public class Example
{
// SERIALIZED - public property with getter
public string Name { get; set; }
// SERIALIZED - public property (getter required, setter for deserialization)
public int Age { get; set; }
// NOT SERIALIZED - field (even if public)
public string PublicField;
// NOT SERIALIZED - private property
private string Secret { get; set; }
// NOT SERIALIZED - internal property
internal string Internal { get; set; }
}

Important: For deserialization, properties need a public setter. Read-only properties can be serialized but won't be populated when loading from the database.
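To illustrate the read-only rule, here is a hypothetical model with a computed getter-only property. Per the rule above, it is written to the stored JSON on save, but never populated on load, since there is no setter; its value is simply recomputed:

```csharp
public class Invoice
{
    public long Id { get; set; }
    public decimal Subtotal { get; set; }
    public decimal Tax { get; set; }

    // Serialized on save (it has a public getter), but never populated
    // when loading - the value is recomputed from Subtotal and Tax.
    public decimal Total => Subtotal + Tax;
}
```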
Structs: Fields and Properties
Structs behave differently - both public fields AND public properties are serialized:
public struct Point
{
public int X; // SERIALIZED - public field on struct
public int Y; // SERIALIZED - public field on struct
public double Distance { get; set; } // SERIALIZED - public property
}

Supported Types
SoloDB's built-in serializer handles these types natively:
| Primitives | int, long, float, double, decimal, bool, char, byte, etc. |
| Strings | string (null-safe) |
| Date/Time | DateTime, DateTimeOffset, TimeSpan |
| GUIDs | Guid |
| Collections | Arrays, List<T>, Dictionary<K,V>, HashSet<T>, Queue<T>, Stack<T> |
| Nullable | Nullable<T> (e.g., int?, DateTime?) |
| Tuples | ValueTuple, Tuple |
| F# Types | F# records, discriminated unions, F# lists |
| Binary | byte[] (stored as Base64) |
| Nested Objects | Any class/struct following these rules |
Nested Objects
Objects can contain other objects to any depth:
public class Order
{
public long Id { get; set; }
public Customer Customer { get; set; } // Nested object
public List<OrderItem> Items { get; set; } // List of objects
public Address ShippingAddress { get; set; } // Another nested object
}
public class OrderItem
{
public string ProductName { get; set; }
public int Quantity { get; set; }
public decimal Price { get; set; }
}

What to Avoid
- Circular references - will cause stack overflow
- Very deep nesting - impacts performance and query complexity
- Storing huge binary data - use the FileSystem API instead
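For example, a tree-like model with a parent back-reference creates a cycle that would recurse during serialization. A sketch of the pattern to avoid, and an alternative that stores a scalar reference instead (type names are illustrative):

```csharp
using System.Collections.Generic;

// AVOID: a child holding a reference back to its parent creates a cycle
// (parent -> children -> parent -> ...), which overflows the stack on insert.
public class BadCategory
{
    public long Id { get; set; }
    public string Name { get; set; }
    public BadCategory Parent { get; set; }         // circular reference
    public List<BadCategory> Children { get; set; }
}

// PREFER: store the parent's Id and resolve it with a second query
public class Category
{
    public long Id { get; set; }
    public string Name { get; set; }
    public long? ParentId { get; set; }             // null for root nodes
}
```

With the second shape, children are found with a filter such as `categories.Where(c => c.ParentId == id)` rather than through an embedded back-reference.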
Custom JSON Serializer
SoloDB uses its own high-performance JSON serializer instead of Newtonsoft.Json or System.Text.Json. The serializer is designed specifically for document database use cases:
- Generic type caching - Serializers are generated once per type and cached for subsequent use
- Polymorphic support - Automatically adds a $type discriminator for non-sealed types when needed
- F# native support - Records, discriminated unions, and F# lists are handled natively
- No external dependencies - Self-contained implementation with no JSON library dependencies
The serializer converts objects to an internal JsonValue representation which is then stored as SQLite JSONB. Deserialization reads the JSONB and maps it back to your types using the same rules described above.
ID Generation
Every document needs a unique identifier. SoloDB provides flexible options for ID handling.
Default: Auto-Increment Long
The simplest approach - name a property Id with type long:
public class Product
{
public long Id { get; set; } // Auto-detected as primary key
public string Name { get; set; }
}
var products = db.GetCollection<Product>();
var product = new Product { Name = "Widget" };
products.Insert(product);
// product.Id is now 1, 2, 3, etc.

Custom ID with [SoloId] Attribute
For other ID types or custom generation logic, use the [SoloId] attribute with a custom generator:
using SoloDatabase.Attributes;
// Define a custom ID generator that produces string IDs from GUIDs
public class StringGuidIdGenerator : IIdGenerator<Document>
{
public object GenerateId(ISoloDBCollection<Document> collection, Document item)
{
return Guid.NewGuid().ToString("N"); // Returns string like "a1b2c3d4..."
}
public bool IsEmpty(object id)
{
return string.IsNullOrEmpty(id as string);
}
}
// Use it in your model
public class Document
{
[SoloId(typeof(StringGuidIdGenerator))]
public string Id { get; set; } // String ID, not Guid
public string Title { get; set; }
public string Content { get; set; }
}

Supported ID Types
| long | Default, auto-incremented by SQLite |
| int | Auto-incremented (cast from SQLite's int64) |
| Guid | Requires a generator (e.g., one that calls Guid.NewGuid()) |
| string | Must be provided by your generator or set before insert |
Note: For long and int ID types without a custom generator, SQLite handles auto-incrementing automatically. You don't need to implement a generator for these common cases.
Custom Guid ID Generator Example
Here's a simple generator for Guid IDs:
public class GuidIdGenerator : IIdGenerator<MyDocument>
{
public object GenerateId(ISoloDBCollection<MyDocument> collection, MyDocument item)
{
return Guid.NewGuid();
}
public bool IsEmpty(object id) => id is Guid g && g == Guid.Empty;
}

Working with Collections
Collections are containers for your documents, similar to tables in SQL databases.
Getting a Collection
// Typed collection - name derived from type (recommended)
var users = db.GetCollection<User>(); // Collection name: "User"
// Custom name - useful for multiple collections of same type
var activeUsers = db.GetCollection<User>("ActiveUsers");
var archivedUsers = db.GetCollection<User>("ArchivedUsers");
// Untyped collection for dynamic scenarios
var untypedCollection = db.GetUntypedCollection("MyData");

Collection Lifecycle
- Collections are created automatically when first accessed
- The underlying SQLite table is created with the proper schema
- Indexes defined via attributes are created on first access
Reserved names: Collection names starting with SoloDB are reserved for internal use and will throw an ArgumentException. For example, "SoloDBUsers" is not allowed, but "MyUsers" or "UsersSoloDB" are fine.
// Check if a collection exists
bool exists = db.CollectionExists<User>();
bool existsByName = db.CollectionExists("User");
// Drop a collection (deletes all data!)
db.DropCollection<User>();
db.DropCollection("ArchivedUsers");

CRUD Operations
Insert
var users = db.GetCollection<User>();
// Single insert - returns the generated ID
var user = new User { Name = "Alice", Email = "alice@example.com" };
long id = users.Insert(user);
// user.Id is also set to the same value
// Batch insert - much faster for multiple items
var newUsers = new List<User>
{
new User { Name = "Bob", Email = "bob@example.com" },
new User { Name = "Charlie", Email = "charlie@example.com" }
};
IList<long> ids = users.InsertBatch(newUsers);

Insert or Replace (Upsert)
When you have unique indexes, you can upsert based on those constraints:
// If a user with this email exists (assuming unique index), replace it
var user = new User { Name = "Alice Updated", Email = "alice@example.com" };
users.InsertOrReplace(user);
// Batch version
users.InsertOrReplaceBatch(manyUsers);

Read
// By ID (throws KeyNotFoundException if not found)
User user = users.GetById(1);
// By ID - returns an F# Option (FSharpOption<User>)
var userOption = users.TryGetById(1);
// In C#, test the option with OptionModule from FSharp.Core
if (Microsoft.FSharp.Core.OptionModule.IsSome(userOption))
{
var value = userOption.Value; // safe after the IsSome check
}
// By custom ID type
Document doc = documents.GetById<string>("doc-abc-123");
// All documents as a list
var allUsers = users.ToList();

Update
// Full document update
var user = users.GetById(1);
user.Name = "New Name";
user.Email = "new@email.com";
users.Update(user); // Replaces entire document
// Replace matching documents
users.ReplaceOne(u => u.Email == "old@example.com", newUserData);
users.ReplaceMany(u => u.Status == "pending", templateUser);

Note: Methods ending in One (like ReplaceOne, DeleteOne) affect only one document. If multiple documents match the filter, which one is affected is determined by SQLite's internal ordering and may appear random. Use these methods only when you expect exactly one match, or when you don't care which matching document is affected.
Partial Updates with UpdateMany
For efficient partial updates without loading the full document. This is significantly faster than loading documents with GetById, modifying them, and calling Update, because it executes a single SQL statement instead of multiple round-trips:
// Set a single field
int count = users.UpdateMany(
u => u.Id <= 10, // Filter
u => u.IsActive.Set(true) // Update action
);
// Set multiple fields at once
users.UpdateMany(
u => u.Status == "pending",
u => u.Status.Set("approved"),
u => u.ApprovedAt.Set(DateTime.UtcNow),
u => u.ApprovedBy.Set("admin")
);
// Append to a collection property
users.UpdateMany(
u => u.Id == userId,
u => u.Tags.Append("verified")
);

Delete
// By ID - returns count of deleted (0 or 1)
int deleted = users.Delete(1);
// By custom ID
documents.Delete<string>("doc-abc-123");
// By predicate
users.DeleteOne(u => u.Email == "old@example.com"); // First match only
users.DeleteMany(u => u.IsActive == false); // All matches

Querying with LINQ
SoloDB collections implement IQueryable<T>, giving you full LINQ support. Queries are translated to SQL and executed on SQLite.
Filtering
// Where clause
var activeUsers = users.Where(u => u.IsActive).ToList();
// Multiple conditions
var thirtyDaysAgo = DateTime.UtcNow.AddDays(-30);
var results = users.Where(u =>
u.IsActive &&
u.CreatedAt > thirtyDaysAgo &&
u.Email.Contains("@company.com")
).ToList();

Single Item Queries
// First match (throws if none)
var first = users.First(u => u.Email == "admin@example.com");
// First or default (returns null if none)
var admin = users.FirstOrDefault(u => u.Role == "Admin");
// Single (throws if not exactly one)
var unique = users.Single(u => u.Username == "johndoe");
// Check existence
bool hasAdmins = users.Any(u => u.Role == "Admin");
bool allActive = users.All(u => u.IsActive);

Ordering and Pagination
// Order by
var sorted = users
.OrderBy(u => u.Name)
.ThenByDescending(u => u.CreatedAt)
.ToList();
// Pagination
int page = 2;
int pageSize = 20;
var pageResults = users
.OrderBy(u => u.Id)
.Skip((page - 1) * pageSize)
.Take(pageSize)
.ToList();

Projections
// Select specific properties
var emails = users.Select(u => u.Email).ToList();
// Project to anonymous type
var summaries = users.Select(u => new
{
u.Name,
u.Email,
DaysSinceCreated = (DateTime.UtcNow - u.CreatedAt).Days
}).ToList();
// Project to DTO
var dtos = users.Select(u => new UserDto
{
FullName = u.Name,
ContactEmail = u.Email
}).ToList();

Aggregates
int totalUsers = users.Count();
int activeCount = users.Count(u => u.IsActive);
long total = users.LongCount();
// Note: Min, Max, Sum, Average are supported on numeric projections
var maxId = users.Max(u => u.Id);

String Operations
// Contains, StartsWith, EndsWith
var results = users.Where(u =>
u.Name.Contains("john") ||
u.Email.StartsWith("admin") ||
u.Email.EndsWith("@company.com")
).ToList();
// SQL LIKE pattern (via extension)
var pattern = users.Where(u => u.Name.Like("J%n")).ToList();

Performance note: StartsWith can use indexes for faster lookups (translated to >= 'prefix' AND < 'next' comparisons, which SQLite optimizes efficiently). However, EndsWith and Contains cannot use indexes and require a full table scan.
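To make the prefix rewrite concrete, here is a small sketch in plain C# (not part of SoloDB's API; the helper name is hypothetical) of how a StartsWith filter can be turned into an index-friendly range:

```csharp
using System;

// StartsWith("Jo") can be rewritten as: Name >= "Jo" AND Name < "Jp",
// where the exclusive upper bound is the prefix with its last character
// incremented. Both comparisons are plain range predicates, so an index
// on Name applies. (Simplified: ignores the char.MaxValue edge case.)
static string PrefixUpperBound(string prefix)
{
    var chars = prefix.ToCharArray();
    chars[chars.Length - 1]++; // "Jo" -> "Jp"
    return new string(chars);
}

Console.WriteLine(PrefixUpperBound("Jo")); // "Jp"
```

No such rewrite exists for EndsWith or Contains, because a suffix or substring match cannot be expressed as a range over an ordered index.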
Array/Collection Queries
// Query nested arrays
var tagged = users.Where(u => u.Tags.Contains("premium")).ToList();
// Check if any element matches
var oneWeekAgo = DateTime.UtcNow.AddDays(-7);
var withRecentOrders = users.Where(u =>
u.Orders.Any(o => o.Date > oneWeekAgo)
).ToList();

Indexing
Indexes dramatically improve query performance for filtered and sorted operations. Without an index, SoloDB must scan every document.
Attribute-Based Indexes
The easiest way - add [Indexed] to properties you frequently query:
using SoloDatabase.Attributes;
public class Product
{
public long Id { get; set; } // Always indexed (primary key)
[Indexed(unique: true)] // Unique index - no duplicates allowed
public string SKU { get; set; }
[Indexed] // Non-unique index
public string Category { get; set; }
[Indexed]
public decimal Price { get; set; }
public string Description { get; set; } // Not indexed
}

Indexes are automatically created when the collection is first accessed.
When to Index
- DO index: Properties used in Where clauses, OrderBy, and unique constraints
- DON'T index: Properties rarely queried, or only used in Select projections
- Consider trade-offs: Indexes speed up reads but slow down writes slightly, and increase the database file size on disk or in memory
Programmatic Indexes
var products = db.GetCollection<Product>();
// Create a non-unique index
products.EnsureIndex(p => p.Category);
// Create a unique index
products.EnsureUniqueAndIndex(p => p.Email);
// Remove an index
products.DropIndexIfExists(p => p.Category);
// Ensure all attribute-defined indexes exist
products.EnsureAddedAttributeIndexes();

Note: If you add new [Indexed] attributes to your model classes after the database already exists, the indexes won't be created automatically until you call EnsureAddedAttributeIndexes(). Indexes are only auto-created on first collection access.
Unique Constraint Violations
Inserting a duplicate value for a unique index throws SqliteException:
try
{
products.Insert(new Product { SKU = "EXISTING-SKU" });
}
catch (Microsoft.Data.Sqlite.SqliteException ex)
when (ex.Message.Contains("UNIQUE"))
{
Console.WriteLine("SKU already exists!");
}

Transactions
For operations that must succeed or fail together, use transactions. If any exception occurs, all changes are automatically rolled back.
Basic Transaction
db.WithTransaction(tx =>
{
var accounts = tx.GetCollection<Account>();
var from = accounts.GetById(fromAccountId);
var to = accounts.GetById(toAccountId);
if (from.Balance < amount)
throw new InvalidOperationException("Insufficient funds");
from.Balance -= amount;
to.Balance += amount;
accounts.Update(from);
accounts.Update(to);
});
// If we get here, transaction committed successfully

Transaction with Return Value
var orderId = db.WithTransaction(tx =>
{
var orders = tx.GetCollection<Order>();
var inventory = tx.GetCollection<InventoryItem>();
// Create order and update inventory atomically
var order = new Order { CustomerId = customerId, Total = total };
orders.Insert(order);
foreach (var item in orderItems)
{
var inv = inventory.Single(i => i.ProductId == item.ProductId);
inv.Quantity -= item.Quantity;
inventory.Update(inv);
}
return order.Id;
});

Automatic Rollback
try
{
db.WithTransaction(tx =>
{
var users = tx.GetCollection<User>();
users.Insert(new User { Name = "Test" });
// This exception causes automatic rollback
throw new Exception("Something went wrong!");
});
}
catch (Exception)
{
// The user was NOT inserted - transaction rolled back
}

Polymorphic Collections
Store different derived types in a single collection and query them by base type or filter by concrete type.
Abstract Base Class
public abstract class Shape
{
public long Id { get; set; }
public string Color { get; set; }
public abstract double CalculateArea();
}
public class Circle : Shape
{
public double Radius { get; set; }
public override double CalculateArea() => Math.PI * Radius * Radius;
}
public class Rectangle : Shape
{
public double Width { get; set; }
public double Height { get; set; }
public override double CalculateArea() => Width * Height;
}

Usage
var shapes = db.GetCollection<Shape>();
// Insert different types
shapes.Insert(new Circle { Color = "Red", Radius = 5.0 });
shapes.Insert(new Rectangle { Color = "Blue", Width = 4.0, Height = 6.0 });
// Query all shapes (returns properly typed objects)
var allShapes = shapes.ToList();
// allShapes[0] is Circle, allShapes[1] is Rectangle
// Query by base class properties
var blueShapes = shapes.Where(s => s.Color == "Blue").ToList();
// Filter by concrete type using OfType<T>()
var circles = shapes.OfType<Circle>().ToList();
var largeCircles = shapes.OfType<Circle>()
.Where(c => c.Radius > 3.0)
.ToList();

How It Works
SoloDB stores type information in a special $type field in the JSON when the collection is based on an abstract class or interface. This allows correct deserialization back to the original type.
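Conceptually, the stored JSON for the Circle inserted above might look like the following (the exact discriminator format and value are internal details of the serializer; this is an illustration, not the literal stored bytes):

```json
{
  "$type": "Circle",
  "Color": "Red",
  "Radius": 5.0
}
```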
Direct SQL Access
For complex queries or operations not covered by LINQ, access SQLite directly. SoloDB provides a Dapper-like micro-ORM API with high-performance object mapping using compiled expression trees.
Borrowing a Connection
// Borrow a connection from the pool
using var conn = db.Connection.Borrow();

Dapper-Like Query API
The borrowed connection provides familiar methods similar to Dapper, with automatic parameter binding and result mapping. Add the following using statement to access these extension methods:
using SoloDatabase.SQLiteTools;

// Execute non-query commands (CREATE, INSERT, UPDATE, DELETE)
// Returns number of rows affected
conn.Execute("CREATE TABLE IF NOT EXISTS Logs (Id INTEGER PRIMARY KEY, Message TEXT)");
conn.Execute("INSERT INTO Logs (Message) VALUES (@msg)", new { msg = "Hello" });
// Query multiple rows - returns IEnumerable<T>
var logs = conn.Query<LogEntry>("SELECT * FROM Logs WHERE Id > @id", new { id = 100 });
// Query first row (throws if no results)
var count = conn.QueryFirst<int>("SELECT COUNT(*) FROM Logs");
// Query first row or default (returns null/default if no results)
var log = conn.QueryFirstOrDefault<LogEntry>("SELECT * FROM Logs WHERE Id = @id", new { id = 999 });

Object Mapping
The query methods automatically map SQL results to your types. For complex types, SoloDB builds and compiles LINQ expression trees on first use, creating optimized mappers that match column names to property/field names:
// Map to a class
public class LogEntry
{
public long Id { get; set; }
public string Message { get; set; }
}
var logs = conn.Query<LogEntry>("SELECT Id, Message FROM Logs");
// Map to anonymous types
var results = conn.Query<dynamic>("SELECT Id, Message FROM Logs");
// Map to primitives
var ids = conn.Query<long>("SELECT Id FROM Logs");

Accessing Collection Data
// Documents are stored as JSONB in the 'Value' column
// Use SQLite's json_extract to query specific fields
var rawUsers = conn.Query<dynamic>(
"SELECT Id, json_extract(Value, '$.Name') as Name FROM User WHERE json_extract(Value, '$.IsActive') = 1"
);

File Storage
SoloDB includes a complete hierarchical file storage system stored directly in the database. Files are split into 16KB chunks, compressed using Snappy, and stored in SQLite. This provides:
- Partial reads - Read only what you need without loading the entire file
- Sparse file support - Write at any offset; unwritten areas don't consume space
- Automatic compression - Snappy compression reduces storage size
- Transactional safety - File operations participate in database transactions
- Metadata support - Attach key-value metadata to files and directories
Accessing the FileSystem
var fs = db.FileSystem;

Upload and Download
// Upload from a stream
using (var stream = File.OpenRead("report.pdf"))
{
fs.Upload("/documents/reports/2024-q4.pdf", stream);
}
// Download to a stream
using (var output = File.Create("downloaded.pdf"))
{
fs.Download("/documents/reports/2024-q4.pdf", output);
}
// Check existence and delete
bool exists = fs.Exists("/documents/reports/2024-q4.pdf");
fs.DeleteFileAt("/documents/reports/2024-q4.pdf");

Stream-Based Access (Like File.Open)
The OpenOrCreateAt method returns a standard Stream that works just like File.Open(). You can use it with any .NET stream API:
// Compare: System.IO file access
using (var fileStream = File.Open("local.txt", FileMode.OpenOrCreate))
{
fileStream.Write(data, 0, data.Length);
fileStream.Position = 0;
fileStream.Read(buffer, 0, buffer.Length);
}
// SoloDB file access - same API!
using (var fileStream = fs.OpenOrCreateAt("/data/log.txt"))
{
fileStream.Write(data, 0, data.Length);
fileStream.Position = 0;
fileStream.Read(buffer, 0, buffer.Length);
}

Works with StreamReader/StreamWriter too:
// Write text
using (var stream = fs.OpenOrCreateAt("/logs/app.log"))
using (var writer = new StreamWriter(stream))
{
writer.WriteLine($"[{DateTime.UtcNow}] Application started");
writer.WriteLine($"[{DateTime.UtcNow}] Processing...");
}
// Read text
using (var stream = fs.OpenOrCreateAt("/logs/app.log"))
using (var reader = new StreamReader(stream))
{
string contents = reader.ReadToEnd();
}

Random Access (Partial Reads/Writes)
Unlike document storage, FileSystem supports efficient partial access:
// Write at specific offset (creates sparse file if needed)
byte[] data = GetSomeData();
fs.WriteAt("/data/sparse.bin", 1024 * 1024, data); // Write at 1MB offset
// Read from specific offset - doesn't load entire file
byte[] chunk = fs.ReadAt("/data/sparse.bin", 1024 * 1024, data.Length);
// Sparse files: unwritten areas read as zeros, don't consume storage
fs.WriteAt("/sparse.dat", 10_000_000, new byte[] { 1, 2, 3 }); // 10MB offset
// File is NOT 10MB on disk - only the written chunks are stored

File and Directory Metadata
// Set file metadata (key-value pairs)
fs.SetMetadata("/documents/report.pdf", "Author", "Finance Team");
fs.SetMetadata("/documents/report.pdf", "Department", "Accounting");
// Read file info with metadata
var fileInfo = fs.GetAt("/documents/report.pdf");
Console.WriteLine($"Name: {fileInfo.Name}");
Console.WriteLine($"Size: {fileInfo.Length} bytes");
Console.WriteLine($"Created: {fileInfo.Created}");
Console.WriteLine($"Modified: {fileInfo.Modified}");
Console.WriteLine($"Author: {fileInfo.Metadata["Author"]}");
// Delete specific metadata
fs.DeleteMetadata(fileInfo, "Department");
// Directory metadata works the same way
var dir = fs.GetOrCreateDirAt("/documents/archive");
fs.SetDirectoryMetadata(dir, "RetentionPolicy", "7years");
fs.DeleteDirectoryMetadata(dir, "RetentionPolicy");

Directory Operations
// Create directory (creates parent directories automatically)
var dir = fs.GetOrCreateDirAt("/documents/archive/2024");
// Get directory info
var dirInfo = fs.GetDirAt("/documents/archive");
// List files in a directory
var files = fs.ListFilesAt("/documents/reports/");
// List subdirectories
var dirs = fs.ListDirectoriesAt("/documents/");
// Recursive listing (files and directories)
var allEntries = fs.RecursiveListEntriesAt("/documents/");
// Lazy recursive listing (memory efficient for large trees)
var lazyEntries = fs.RecursiveListEntriesAtLazy("/");
// Delete directory (must be empty)
fs.DeleteDirAt("/documents/old");

Move and Rename Files
// Move/rename a file (throws IOException if destination exists)
fs.MoveFile("/documents/draft.pdf", "/documents/final.pdf");
// Move to different directory
fs.MoveFile("/inbox/file.txt", "/archive/2024/file.txt");
// Move and replace if exists
fs.MoveReplaceFile("/temp/new.pdf", "/documents/report.pdf");

Bulk Upload
For uploading many files efficiently in a single transaction:
var files = new List<BulkFileData>
{
new("/logs/app1.log", Encoding.UTF8.GetBytes("Log data 1"), null, null),
new("/logs/app2.log", Encoding.UTF8.GetBytes("Log data 2"), null, null),
new("/images/logo.png", imageBytes, DateTimeOffset.UtcNow, null)
};
fs.UploadBulk(files); // Single transaction for all files

File Timestamps
Files and directories track Created and Modified timestamps. The Modified timestamp is automatically updated whenever you write to a file or upload new content:
// Modified is automatically updated on writes
fs.WriteAt("/data/file.bin", 0, data); // Modified = now
fs.Upload("/data/file.bin", stream); // Modified = now
// Manually set timestamps when needed
fs.SetFileCreationDate("/archive/old.txt", DateTimeOffset.UtcNow.AddYears(-1));
fs.SetFileModificationDate("/archive/old.txt", DateTimeOffset.UtcNow.AddDays(-30));
// Read timestamps from file info
var info = fs.GetAt("/archive/old.txt");
Console.WriteLine($"Created: {info.Created}");
Console.WriteLine($"Modified: {info.Modified}");

Configuration
Database Location
// File-based (persistent)
using var db = new SoloDB("path/to/database.db");
using var db = new SoloDB("./relative/path.db");
using var db = new SoloDB(@"C:\absolute\path.db");
// In-memory (lost when disposed)
using var db = new SoloDB("memory:my-database");
// Shared in-memory (accessible by name within process)
using var db1 = new SoloDB("memory:shared");
using var db2 = new SoloDB("memory:shared"); // Same database

Long-Running Applications
// Singleton pattern for web apps / services
public static class Database
{
public static SoloDB Instance { get; } = new SoloDB("app.db");
}
// Usage
var users = Database.Instance.GetCollection<User>();

Database Maintenance
// Optimize query plans (runs ANALYZE)
db.Optimize();
// Backup to another database
using var backup = new SoloDB("backup.db");
db.BackupTo(backup);
// Vacuum into new file (compacts and defragments)
db.VacuumTo("compacted.db");

Note: BackupTo and VacuumTo must use the same storage medium - you cannot back up from a file-based database to an in-memory database or vice versa.
Query Caching
SoloDB caches prepared SQL statements for performance. The internal SoloDBConfiguration type contains a CachingEnabled flag that controls this behavior. You can manage caching through these methods:
// Disable caching (reduces memory, slower repeated queries)
// Sets config.CachingEnabled = false
db.DisableCaching();
// Re-enable caching
// Sets config.CachingEnabled = true
db.EnableCaching();
// Clear the current cache (frees memory, keeps caching enabled)
db.ClearCache();

Caching is enabled by default. Disabling it automatically clears any cached commands. This can be useful for memory-constrained environments or when running many unique one-off queries.
Extending LINQ Support
SoloDB's LINQ-to-SQL translator can be extended to handle custom expressions. Two handler lists are available:
using SoloDatabase.QueryTranslator;
// Pre-expression handler: Intercept expressions BEFORE the default translator
// Return true to indicate you've handled the expression
QueryTranslator.preExpressionHandler.Add((queryBuilder, expression) =>
{
// Custom handling for specific expression types
// Return false to let the default translator handle it
return false;
});
// Unknown expression handler: Handle expressions the default translator doesn't support
// Called when no built-in handler matches
QueryTranslator.unknownExpressionHandler.Add((queryBuilder, expression) =>
{
// Handle custom expression types that aren't supported by default
// Throw or return false if you can't handle it
return false;
});

Advanced: The preExpressionHandler list runs first for every expression, allowing you to override default behavior. The unknownExpressionHandler list is called only when the built-in translator encounters an unrecognized expression type, providing a fallback mechanism.
Performance Tips
1. Use Indexes on Queried Properties
// Without index: Full table scan
var user = users.FirstOrDefault(u => u.Email == "test@example.com");
// With index: Fast lookup
[Indexed]
public string Email { get; set; }

2. Use Batch Operations
// Slow: 1000 individual transactions
foreach (var item in items)
collection.Insert(item);
// Fast: Single transaction
collection.InsertBatch(items);

3. Use Transactions for Multiple Operations
// Slow: Each update is a separate transaction
foreach (var user in usersToUpdate)
{
user.LastSeen = DateTime.UtcNow;
users.Update(user);
}
// Fast: Single transaction
db.WithTransaction(tx =>
{
var col = tx.GetCollection<User>();
foreach (var user in usersToUpdate)
{
user.LastSeen = DateTime.UtcNow;
col.Update(user);
}
});

4. Use Projections for Large Documents
// Slow: Loads entire documents
var names = users.ToList().Select(u => u.Name);
// Fast: Only fetches Name field
var names = users.Select(u => u.Name).ToList();

5. Use UpdateMany for Partial Updates
// Slow: Load, modify, save each document
foreach (var user in users.Where(u => u.NeedsUpdate))
{
var u = users.GetById(user.Id);
u.Status = "updated";
users.Update(u);
}
// Fast: Single SQL UPDATE statement
users.UpdateMany(u => u.NeedsUpdate, u => u.Status.Set("updated"));

6. Keep Documents Small
SQLite reads the entire JSONB document when accessing any field. Large documents slow down all operations, even simple queries. For large binary data, use the built-in FileSystem API which supports partial reads:
// Bad: Storing large data in documents
public class Report
{
public long Id { get; set; }
public string Title { get; set; }
public byte[] PdfContent { get; set; } // Large! Loaded on every access
}
// Good: Store large data in FileSystem, reference by path
public class Report
{
public long Id { get; set; }
public string Title { get; set; }
public string PdfPath { get; set; } // e.g., "/reports/2024/report-123.pdf"
}
// Read only what you need from FileSystem
byte[] chunk = fs.ReadAt(report.PdfPath, offset: 0, length: 1024);

Benchmark Results vs LiteDB
SoloDB shows strong performance in common operations:
| Insert 10,000 documents | 29% faster than LiteDB |
| Complex LINQ queries | 95% faster than LiteDB |
| GroupBy operations | 57% faster than LiteDB |
| Memory usage | Up to 99% less allocation |
Source: SoloDB vs LiteDB Benchmark
API Reference
SoloDB Class
new SoloDB(string path) | Create/open database at path |
GetCollection<T>() | Get typed collection (name from type) |
GetCollection<T>(string name) | Get typed collection with custom name |
GetUntypedCollection(string name) | Get untyped collection for dynamic use |
CollectionExists<T>() | Check if collection exists |
DropCollection<T>() | Delete collection and all data |
WithTransaction(Action) | Execute in transaction |
WithTransaction<T>(Func) | Execute in transaction with return |
FileSystem | Access file storage API |
Connection | Access connection pool for raw SQL |
Optimize() | Run SQLite ANALYZE |
BackupTo(SoloDB target) | Backup to another database |
VacuumTo(string path) | Compact into new file |
Dispose() | Close database connection |
ISoloDBCollection<T> Interface
Insert(T item) | Insert document, returns ID |
InsertBatch(IEnumerable<T>) | Batch insert, returns IDs |
InsertOrReplace(T item) | Upsert based on unique index |
GetById(long id) | Get by ID (throws if not found) |
GetById<TId>(TId id) | Get by custom ID type |
TryGetById(long id) | Get by ID (returns null/None) |
Update(T item) | Replace entire document |
UpdateMany(filter, transforms) | Partial update matching docs |
ReplaceOne(filter, item) | Replace first match |
ReplaceMany(filter, item) | Replace all matches |
Delete(long id) | Delete by ID |
DeleteOne(filter) | Delete first match |
DeleteMany(filter) | Delete all matches |
EnsureIndex(expression) | Create non-unique index |
EnsureUniqueAndIndex(expression) | Create unique index |
DropIndexIfExists(expression) | Remove index |
Attributes
[Indexed] | Create non-unique index on property |
[Indexed(unique: true)] | Create unique index on property |
[SoloId(typeof(Generator))] | Mark as ID with custom generator |
[Polimorphic] | Mark a class for polymorphic serialization (stores $type discriminator) |