🎭 zctor Documentation

A lightweight, high-performance actor framework for Zig

πŸ“– View on GitHub

zctor Documentation Book

A comprehensive guide to the zctor actor framework for Zig.

Generated on 2025-06-20 06:41:26


Table of Contents

  1. Introduction to the Documentation
  2. Introduction
  3. Installation
  4. Quick Start
  5. Architecture
  6. API Reference
  7. Examples
  8. Best Practices
  9. Advanced Topics
  10. Contributing
  11. Appendix

1. Introduction to the Documentation

Welcome to the comprehensive documentation for zctor, a lightweight, high-performance actor framework for Zig.

πŸ“š Documentation Structure

This documentation book is organized into the following sections:

  1. Introduction - Overview and key concepts
  2. Installation - Getting started with zctor
  3. Quick Start - Your first actor program
  4. Architecture - Core components and design
  5. API Reference - Complete API documentation
  6. Examples - Practical examples and use cases
  7. Best Practices - Tips and recommendations
  8. Advanced Topics - Advanced usage patterns
  9. Contributing - How to contribute to zctor
  10. Appendix - Additional resources and references

πŸ”§ Auto-Generation

This documentation is automatically generated from: - Source code comments and documentation - README.md content - Example code in the repository - Build system integration

Regenerate Documentation

To regenerate the documentation, use the provided build commands:

# Generate API documentation from source code
zig build docs

# Generate complete documentation book
zig build book

# Generate all documentation
zig build docs-all

Or run the scripts directly:

# Generate API reference
python3 docs/generate_docs.py src docs

# Generate complete book
python3 docs/generate_book.py docs -o docs/zctor-complete-book.md

# Validate documentation
python3 docs/generate_book.py docs --validate

πŸ“– Reading Options

Individual Chapters

Read each chapter separately for focused learning: - Start with Introduction for overview - Follow Installation and Quick Start to get started - Deep dive into Architecture for understanding - Reference API Documentation for implementation details

Complete Book

For offline reading or comprehensive study: - Markdown: zctor-complete-book.md - HTML: Generate with python3 docs/generate_book.py docs --format html

πŸ› οΈ Tools and Scripts

Documentation Generation

Features

πŸš€ Quick Start

New to zctor? Start here:

  1. Installation - Set up zctor in your project
  2. Quick Start - Build your first actor in 5 minutes
  3. Examples - See practical implementations
  4. Best Practices - Learn the recommended patterns

πŸ” Finding Information

πŸ“ Contributing to Documentation

Documentation improvements are welcome! See the Contributing guide for: - How to improve existing documentation - Adding new examples - Fixing typos and errors - Translating documentation

πŸ“„ License

This documentation is part of the zctor project and is licensed under the MIT License. See the Appendix for full license information.


2. Introduction

zctor is a lightweight, high-performance actor framework for Zig, providing concurrent message-passing with asynchronous event handling.

What is the Actor Model?

The Actor Model is a mathematical model of concurrent computation that treats "actors" as the universal primitive of concurrent computation. In response to a message it receives, an actor can:

Why zctor?

zctor brings the power of the Actor Model to Zig with:

Key Features

Performance Benefits

Safety Guarantees

Use Cases

zctor is ideal for:

Comparison with Other Frameworks

Feature zctor Akka (Scala) Elixir/OTP Go channels
Memory Safety βœ… ❌ βœ… βœ…
Performance ⚑ High 🐌 JVM overhead ⚑ High ⚑ High
Learning Curve πŸ“ˆ Moderate πŸ“ˆ Steep πŸ“ˆ Moderate πŸ“‰ Easy
Type System ⚑ Compile-time ⚑ Compile-time πŸ”„ Runtime ⚑ Compile-time
Dependencies πŸ“¦ Minimal πŸ“¦ Heavy πŸ“¦ Runtime πŸ“¦ None

Architecture Overview

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   ActorEngine   │───▢│  ActorThread    │───▢│   Actor(T)      β”‚
β”‚                 β”‚    β”‚                 β”‚    β”‚                 β”‚
β”‚ - Thread Pool   β”‚    β”‚ - Event Loop    β”‚    β”‚ - Message Queue β”‚
β”‚ - Load Balance  β”‚    β”‚ - Actor Registryβ”‚    β”‚ - State Mgmt    β”‚
β”‚ - Lifecycle     β”‚    β”‚ - Context       β”‚    β”‚ - Handler       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Getting Started

Ready to start building with zctor? Check out the Installation Guide to get up and running quickly.

Next Steps


3. Installation

This guide covers different ways to install and set up zctor in your project.

Requirements

Installation Methods

Option 1: From Source

Clone the repository and build from source:

git clone https://github.com/YouNeedWork/zctor.git
cd zctor
zig build

Option 2: Using as a Library

Add zctor as a dependency to your Zig project.

Step 1: Add to build.zig.zon

Add zctor to your project's build.zig.zon dependencies:

.dependencies = .{
    .zctor = .{
        .url = "https://github.com/YouNeedWork/zctor/archive/main.tar.gz",
        .hash = "1220...", // Use zig fetch to get the correct hash
    },
},

Step 2: Configure build.zig

In your project's build.zig, add zctor as a dependency:

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // Add zctor dependency
    const zctor_dep = b.dependency("zctor", .{ 
        .target = target, 
        .optimize = optimize 
    });

    const exe = b.addExecutable(.{
        .name = "my-app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Import zctor module
    exe.root_module.addImport("zctor", zctor_dep.module("zctor"));

    b.installArtifact(exe);
}

Step 3: Import in Your Code

Now you can import and use zctor in your Zig code:

const std = @import("std");
const zctor = @import("zctor");

const ActorEngine = zctor.ActorEngine;
const Actor = zctor.Actor;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var engine = try ActorEngine.init(allocator);
    defer engine.deinit();

    // Your actor code here...
}

Verifying Installation

Build Test

Verify that zctor builds correctly:

zig build

Run Tests

Run the test suite to ensure everything is working:

zig build test

Run Example

Try running the included example:

zig build run

You should see output similar to:

Actor Engine Started
Got Hello: World (count: 1, total_hellos: 1)
Got Ping: 42 (count: 2, total_pings: 1)

Project Structure

After installation, your project structure should look like:

my-project/
β”œβ”€β”€ build.zig
β”œβ”€β”€ build.zig.zon
β”œβ”€β”€ src/
β”‚   └── main.zig
└── zig-cache/
    └── dependencies/
        └── zctor/

Development Setup

Editor Support

For the best development experience, use an editor with Zig language server support:

Debug Builds

For development, use debug builds:

zig build -Doptimize=Debug

Release Builds

For production, use optimized builds:

zig build -Doptimize=ReleaseFast

Troubleshooting

Common Issues

Zig Version Mismatch

Problem: Build fails with compiler errors Solution: Ensure you're using Zig 0.14.0 or higher:

zig version

Missing libxev

Problem: Linker errors related to libxev Solution: libxev should be automatically fetched. Try:

zig build --fetch

Permission Errors

Problem: Cannot write to zig-cache Solution: Ensure you have write permissions in your project directory

Getting Help

If you encounter issues:

  1. Check the examples for working code
  2. Review the API reference for usage details
  3. Open an issue on GitHub

Next Steps

Now that you have zctor installed, you're ready to:


4. Quick Start

This guide will get you up and running with zctor in just a few minutes. We'll build a simple chat-like system to demonstrate the core concepts.

Your First Actor

Let's start with a simple "Hello World" actor that can receive and respond to messages.

Step 1: Define Message Types

First, define the types of messages your actor can handle:

const std = @import("std");
const zctor = @import("zctor");
const ActorEngine = zctor.ActorEngine;
const Actor = zctor.Actor;

// Define your message types
const ChatMessage = union(enum) {
    Hello: []const u8,
    Ping: u32,
    Stop: void,

    const Self = @This();

    // Message handler - this is where the magic happens
    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        switch (msg) {
            .Hello => |name| {
                std.debug.print("πŸ‘‹ Hello, {s}!\n", .{name});
            },
            .Ping => |value| {
                std.debug.print("πŸ“ Ping: {}\n", .{value});
            },
            .Stop => {
                std.debug.print("πŸ›‘ Stopping actor\n");
            },
        }
    }
};

Step 2: Create and Run the Actor System

Now let's create the actor system and send some messages:

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Create the actor engine
    var engine = try ActorEngine.init(allocator);
    defer engine.deinit();

    // Spawn an actor with our message handler
    try engine.spawn(ChatMessage, ChatMessage.handle);

    // Send some messages
    var hello_msg = ChatMessage{ .Hello = "World" };
    try engine.send(ChatMessage, &hello_msg);

    var ping_msg = ChatMessage{ .Ping = 42 };
    try engine.send(ChatMessage, &ping_msg);

    var stop_msg = ChatMessage{ .Stop = {} };
    try engine.send(ChatMessage, &stop_msg);

    // Start the engine (this will block and process messages)
    engine.start();
}

Step 3: Run Your Program

Save this as src/main.zig and run it:

zig build run

You should see output like:

Actor Engine Started
πŸ‘‹ Hello, World!
πŸ“ Ping: 42
πŸ›‘ Stopping actor

Adding State

Real actors often need to maintain state. Let's extend our example to track message counts:

const ChatMessage = union(enum) {
    Hello: []const u8,
    Ping: u32,
    GetStats: void,
    Reset: void,

    const Self = @This();

    // Define actor state
    const State = struct {
        hello_count: u32 = 0,
        ping_count: u32 = 0,
        allocator: std.mem.Allocator,

        pub fn init(allocator: std.mem.Allocator) !*State {
            const state = try allocator.create(State);
            state.* = State{
                .hello_count = 0,
                .ping_count = 0,
                .allocator = allocator,
            };
            return state;
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        // Get or create state
        var state = actor.getState(State) orelse blk: {
            const new_state = State.init(actor.getAllocator()) catch |err| {
                std.debug.print("Failed to initialize state: {}\n", .{err});
                return null;
            };
            actor.setState(new_state);
            break :blk actor.getState(State).?;
        };

        switch (msg) {
            .Hello => |name| {
                state.hello_count += 1;
                std.debug.print("πŸ‘‹ Hello, {s}! (hello count: {})\n", .{ name, state.hello_count });
            },
            .Ping => |value| {
                state.ping_count += 1;
                std.debug.print("πŸ“ Ping: {} (ping count: {})\n", .{ value, state.ping_count });
            },
            .GetStats => {
                std.debug.print("πŸ“Š Stats - Hellos: {}, Pings: {}\n", .{ state.hello_count, state.ping_count });
            },
            .Reset => {
                state.hello_count = 0;
                state.ping_count = 0;
                std.debug.print("πŸ”„ Stats reset!\n");
            },
        }
    }
};

Multiple Actors

You can create multiple actor types in the same system:

// Logger actor
const LogMessage = union(enum) {
    Info: []const u8,
    Error: []const u8,

    const Self = @This();

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        _ = actor;
        switch (msg) {
            .Info => |text| std.debug.print("ℹ️  INFO: {s}\n", .{text}),
            .Error => |text| std.debug.print("❌ ERROR: {s}\n", .{text}),
        }
    }
};

// Counter actor
const CounterMessage = union(enum) {
    Increment: void,
    Decrement: void,
    Get: void,

    // ... state and handler implementation
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var engine = try ActorEngine.init(allocator);
    defer engine.deinit();

    // Spawn multiple actor types
    try engine.spawn(ChatMessage, ChatMessage.handle);
    try engine.spawn(LogMessage, LogMessage.handle);
    try engine.spawn(CounterMessage, CounterMessage.handle);

    // Send messages to different actors
    var hello = ChatMessage{ .Hello = "Multi-Actor System" };
    try engine.send(ChatMessage, &hello);

    var log = LogMessage{ .Info = "System started successfully" };
    try engine.send(LogMessage, &log);

    engine.start();
}

Key Concepts Learned

Through this quick start, you've learned:

  1. Message Definition: How to define typed messages using unions
  2. Handler Functions: How to process messages in actor handlers
  3. State Management: How actors can maintain internal state
  4. Actor Lifecycle: How to create, spawn, and communicate with actors
  5. Multi-Actor Systems: How to run multiple actor types together

Common Patterns

Request-Response

// Send a message and handle the response
const RequestMessage = union(enum) {
    GetUserById: struct { id: u32, reply_to: *Actor(ResponseMessage) },
    // ... other requests
};

const ResponseMessage = union(enum) {
    UserFound: User,
    UserNotFound: void,
    // ... other responses
};

Supervisor Pattern

// Supervisor that manages child actors
const SupervisorMessage = union(enum) {
    SpawnChild: []const u8,
    ChildFailed: u32,
    RestartChild: u32,
    // ... supervisor operations
};

Publisher-Subscriber

// Event-driven communication
const EventMessage = union(enum) {
    Subscribe: []const u8,
    Unsubscribe: []const u8,
    Publish: struct { topic: []const u8, data: []const u8 },
    // ... event operations
};

What's Next?

Now that you understand the basics, explore these topics:


5. Architecture

This chapter provides a deep dive into zctor's architecture, explaining how the various components work together to provide efficient actor-based concurrency.

Overview

zctor is built around a multi-layered architecture that provides:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                     ActorEngine                             β”‚
β”‚  - Thread Pool Management                                   β”‚
β”‚  - Load Balancing                                          β”‚
β”‚  - System Lifecycle                                        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
              β”‚
              β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                   ActorThread                               β”‚
β”‚  - Event Loop (libxev)                                     β”‚
β”‚  - Actor Registry                                          β”‚
β”‚  - Message Routing                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
              β”‚
              β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    Actor(T)                                β”‚
β”‚  - Message Queue (FIFO)                                    β”‚
β”‚  - State Management                                        β”‚
β”‚  - Message Handler                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Core Components

ActorEngine

The ActorEngine is the entry point and orchestrator for the entire actor system.

Responsibilities: - Thread Pool Management: Creates and manages worker threads based on CPU cores - Actor Spawning: Creates new actors and assigns them to threads - Load Balancing: Distributes actors across available threads - System Lifecycle: Handles startup, shutdown, and cleanup

Key Methods:

pub fn init(allocator: std.mem.Allocator) !*ActorEngine
pub fn spawn(comptime T: type, handler: fn) !void
pub fn send(comptime T: type, msg: *T) !void
pub fn start() void
pub fn stop() void
pub fn deinit() void

Threading Model: - Automatically detects CPU core count - Creates one thread per CPU core for optimal performance - Uses a thread-per-core model to minimize context switching - Employs work-stealing for load balancing

ActorThread

Each ActorThread manages a collection of actors within a single OS thread context.

Responsibilities: - Event Loop: Runs libxev event loop for async I/O - Actor Registry: Maintains local registry of actors - Message Dispatch: Routes messages to appropriate actors - Context Management: Provides runtime services to actors

Key Features: - Isolation: Each thread operates independently - Non-blocking: Uses async event loop for I/O operations - Type Safety: Maintains type information for message routing - Efficient Dispatch: O(1) actor lookup by type name

Actor(T)

Individual actor instances that process messages of type T.

Responsibilities: - Message Processing: Handles incoming messages sequentially - State Management: Maintains private state between messages - Queue Management: Manages FIFO message queue - Lifecycle: Handles initialization and cleanup

Key Features: - Type Safety: Compile-time type checking for messages - State Isolation: Private state not accessible to other actors - Sequential Processing: Messages processed one at a time - Memory Safety: Automatic memory management with allocators

Context

Provides runtime services and communication facilities to actors.

Services: - Message Sending: Send messages to other actors - System Information: Access to thread ID, engine reference - Resource Management: Allocator access, cleanup hooks - Event Loop: Direct access to underlying event loop

Message Flow

Understanding how messages flow through the system is crucial for effective actor design.

Message Lifecycle

  1. Creation: Message created by sender
  2. Routing: Engine routes to appropriate thread
  3. Queuing: Message added to actor's FIFO queue
  4. Processing: Actor handler processes message
  5. Cleanup: Message memory cleaned up
Sender                 ActorEngine              ActorThread               Actor
  β”‚                         β”‚                        β”‚                     β”‚
  β”‚ 1. send(MessageType, msg)                        β”‚                     β”‚
  │─────────────────────────▢│                        β”‚                     β”‚
  β”‚                         β”‚ 2. route to thread     β”‚                     β”‚
  β”‚                         │────────────────────────▢│                     β”‚
  β”‚                         β”‚                        β”‚ 3. queue message    β”‚
  β”‚                         β”‚                        │────────────────────▢│
  β”‚                         β”‚                        β”‚                     β”‚ 4. process
  β”‚                         β”‚                        β”‚                     │────────┐
  β”‚                         β”‚                        β”‚                     β”‚        β”‚
  β”‚                         β”‚                        β”‚                     β”‚β—€β”€β”€β”€β”€β”€β”€β”€β”˜
  β”‚                         β”‚                        β”‚ 5. cleanup          β”‚
  β”‚                         β”‚                        │◀────────────────────│

Type Safety

zctor provides compile-time type safety for messages:

// Define message type
const MyMessage = union(enum) {
    Hello: []const u8,
    Count: u32,
};

// Handler must match the type
pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    // Compile error if types don't match
}

// Sending requires exact type match
var msg = MyMessage{ .Hello = "world" };
try engine.send(MyMessage, &msg);  // Type checked at compile time

Asynchronous Processing

Messages are processed asynchronously using libxev:

  1. Event Registration: Actor registers with event loop
  2. Message Arrival: New message triggers event
  3. Callback Execution: Event loop calls actor callback
  4. Message Processing: Handler function processes message
  5. Rearm: Actor re-registers for next message

Threading Model

Thread-per-Core Design

zctor uses a thread-per-core model for optimal performance:

const cpu_count = try std.Thread.getCpuCount();
// Create one thread per CPU core
for (0..cpu_count) |i| {
    const thread = try std.Thread.spawn(.{}, threadFunc, .{i});
    threads[i] = thread;
}

Benefits: - CPU Affinity: Threads can be pinned to specific cores - Cache Locality: Better L1/L2 cache utilization - Reduced Contention: Less lock contention between threads - Predictable Performance: More consistent latency

Work Distribution

Actors are distributed across threads using simple round-robin:

pub fn spawn(comptime T: type, handler: fn) !void {
    const thread_idx = self.next_thread % self.thread_count;
    try self.threads[thread_idx].addActor(T, handler);
    self.next_thread += 1;
}

Future Improvements: - Work-stealing when threads become imbalanced - Actor migration for load balancing - CPU affinity management

Memory Management

Allocator Strategy

zctor uses a hierarchical allocator strategy:

  1. System Allocator: Top-level allocator for engine
  2. Thread Allocators: Per-thread allocators for isolation
  3. Actor Allocators: Per-actor allocators for state
// Engine creates thread allocators
const thread_allocator = self.allocator.child();

// Thread creates actor allocators  
const actor_allocator = self.allocator.child();

// Actor uses its allocator for state
const state = try actor.allocator.create(State);

State Management

Actor state is managed automatically:

pub fn getState(self: *Self, comptime S: type) ?*S {
    return @ptrCast(@alignCast(self.current_state));
}

pub fn setState(self: *Self, state: *anyopaque) void {
    self.current_state = state;
}

Features: - Type Safety: State type checked at compile time - Lazy Initialization: State created on first access - Automatic Cleanup: State freed when actor terminates - No Sharing: State is private to each actor

Performance Characteristics

Throughput

Latency

Scalability

Error Handling

Error Propagation

Errors in zctor follow Zig's explicit error handling:

pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    // Return null to indicate error
    const state = actor.getState(State) orelse return null;

    // Error handling in message processing
    doSomething() catch |err| {
        std.log.err("Failed to process: {}", .{err});
        return null;
    };
}

Fault Isolation

Integration with libxev

zctor is built on top of libxev for efficient event handling:

Event Loop Integration

// Each thread runs its own event loop
pub fn start_loop(self: *Self) !void {
    try self.loop.run(.until_done);
}

// Actors integrate with the event loop
self.event.wait(self.ctx.loop, &self.completion, Self, self, Self.actorCallback);

Async Operations

libxev enables: - Non-blocking I/O: File and network operations - Timers: Scheduled message delivery - Signals: System signal handling - Cross-platform: Works on Linux, macOS, Windows

Next Steps

Now that you understand the architecture, explore:


6. API Reference

This is the complete API reference for zctor, automatically generated from source code.

actor_engine.zig

Module Documentation

Register all actor types from a thread in the global registry

Send a message to an actor of type T

Uses round-robin load balancing across all threads that have this actor type

Send a message directly to a specific actor instance

Call an actor of type T and wait for response

Uses round-robin load balancing across all threads that have this actor type

Broadcast a message to all threads that have this actor type (useful for pub-sub patterns)

Get the number of active threads

Get actor registry for debugging

Functions

init

pub fn init(allocator: std.mem.Allocator) !*Self

deinit

pub fn deinit(self: *Self) void

start

pub fn start(self: *Self) void

stop

pub fn stop(self: *Self) void

spawn

pub fn spawn(self: *Self, actor_thread: *ActorThread) !void

send

Send a message to an actor of type T Uses round-robin load balancing across all threads that have this actor type

pub fn send(self: *Self, comptime T: anytype, msg_ptr: *anyopaque) !void

send_to_actor

Send a message directly to a specific actor instance

pub fn send_to_actor(_: *Self, actor: *Actor, comptime T: anytype, msg_ptr: *anyopaque) !void

call

Call an actor of type T and wait for response Uses round-robin load balancing across all threads that have this actor type

pub fn call(self: *Self, comptime T: anytype, msg_ptr: *anyopaque) !?*anyopaque

broadcast

Broadcast a message to all threads that have this actor type (useful for pub-sub patterns)

pub fn broadcast(self: *Self, comptime T: anytype, msg_ptr: *anyopaque) !void

getThreadCount

Get the number of active threads

pub fn getThreadCount(self: *Self) usize

getActorRegistry

Get actor registry for debugging

pub fn getActorRegistry(self: *Self) *const std.StringArrayHashMap(std.ArrayList(usize))

actor_interface.zig

Types

VTable

struct

Functions

run

pub fn run(self: Self) void

add_ctx

pub fn add_ctx(self: Self, ctx: *Context) void

deinit

pub fn deinit(self: Self, allocator: std.mem.Allocator) void

call

pub fn call(self: Self, msg: *anyopaque) ?*anyopaque

send

pub fn send(self: Self, msg: *anyopaque) void

init

pub fn init(actor: anytype) Self

context.zig

Functions

init

pub fn init(allocator: std.mem.Allocator, loop: *xev.Loop, actor_engine: *ActorEngine, thread_id: u64) !*Self

deinit

pub fn deinit(self: *Self, allocator: std.mem.Allocator) void

send

pub fn send(self: *Self, comptime T: type, msg_ptr: *anyopaque) !void

call

pub fn call(self: *Self, comptime T: type, msg_ptr: *anyopaque) !?*anyopaque

root.zig

Module Documentation

zctor - A lightweight actor framework for Zig

This library provides an implementation of the Actor Model with:


actor.zig

Functions

Actor

pub fn Actor(comptime T: type) type

init

pub fn init(allocator: std.mem.Allocator, handler: *const fn (*Self, T) ?*anyopaque) !*Self

run

pub fn run(self: *Self) void

send

pub fn send(self: *Self, msg_ptr: *anyopaque) !void

call

pub fn call(self: *Self, msg_ptr: *anyopaque) !*anyopaque

add_ctx

pub fn add_ctx(self: *Self, ctx: *Context) void

deinit

pub fn deinit(self: *Self, allocator: std.mem.Allocator) void

getState

pub fn getState(self: *Self, comptime S: anytype) ?*S

setState

pub fn setState(self: *Self, state: *anyopaque) void

resetState

pub fn resetState(self: *Self) void

getContext

pub fn getContext(self: *Self) *Context

getAllocator

pub fn getAllocator(self: *Self) std.mem.Allocator

actor_thread.zig

Functions

init

pub fn init(allocator: std.mem.Allocator) !*Self

init_ctx

pub fn init_ctx(self: *Self, actor_engine: *ActorEngine, thread_id: u64) !void

registerActor

pub fn registerActor(self: *Self, actor: anytype) !void

send

pub fn send(self: *Self, comptime T: type, msg_ptr: *T) !void

publish

pub fn publish(self: *Self, comptime T: type, msg_ptr: *anyopaque) void

deinit

pub fn deinit(self: *Self, allocator: std.mem.Allocator) void

run

pub fn run(self: *Self) !void

start_loop

pub fn start_loop(self: *Self) !void

call

pub fn call(self: *Self, comptime T: type, msg_ptr: *T) !?*anyopaque

one_shot.zig

Module Documentation

A one-shot channel that can send exactly one value from sender to receiver

Uses atomic operations and spinning for synchronization

Initialize an empty one-shot channel

Send a value through the channel

Returns true if successful, false if already used

Receive a value from the channel (blocking with spinning)

Returns the value if successful, null if channel was already consumed

Try to receive without blocking

Returns the value if available, null otherwise

Check if the channel is ready to be consumed

Check if the channel has been consumed

Check if the channel is still empty

Convenience wrapper that provides sender and receiver handles

Sender handle for one-shot channel

Receiver handle for one-shot channel

Functions

OneShot

A one-shot channel that can send exactly one value from sender to receiver Uses atomic operations and spinning for synchronization

pub fn OneShot(comptime T: type) type

init

Initialize an empty one-shot channel

pub fn init() Self

send

Send a value through the channel Returns true if successful, false if already used

pub fn send(self: *Self, value: T) bool

receive

Receive a value from the channel (blocking with spinning) Returns the value if successful, null if channel was already consumed

pub fn receive(self: *Self) ?T

tryReceive

Try to receive without blocking Returns the value if available, null otherwise

pub fn tryReceive(self: *Self) ?T

isReady

Check if the channel is ready to be consumed

pub fn isReady(self: *Self) bool

isConsumed

Check if the channel has been consumed

pub fn isConsumed(self: *Self) bool

isEmpty

Check if the channel is still empty

pub fn isEmpty(self: *Self) bool

oneShotChannel

Convenience wrapper that provides sender and receiver handles

pub fn oneShotChannel(comptime T: type) struct

Sender

Sender handle for one-shot channel

pub fn Sender(comptime T: type) type

send

pub fn send(self: Self, value: T) bool

deinit

pub fn deinit(self: Self) void

Receiver

Receiver handle for one-shot channel

pub fn Receiver(comptime T: type) type

receive

pub fn receive(self: Self) ?T

tryReceive

pub fn tryReceive(self: Self) ?T

isReady

pub fn isReady(self: Self) bool

isConsumed

pub fn isConsumed(self: Self) bool

isEmpty

pub fn isEmpty(self: Self) bool


7. Examples

This chapter provides practical examples demonstrating how to use zctor in real-world scenarios.

Basic Examples

Simple Counter Actor

A basic counter that tracks increments and decrements:

const std = @import("std");
const zctor = @import("zctor");
const ActorEngine = zctor.ActorEngine;
const Actor = zctor.Actor;

const CounterMessage = union(enum) {
    Increment: void,
    Decrement: void,
    Get: void,
    Set: i32,

    const Self = @This();

    const State = struct {
        value: i32 = 0,
        allocator: std.mem.Allocator,

        pub fn init(allocator: std.mem.Allocator) !*State {
            const state = try allocator.create(State);
            state.* = State{ .value = 0, .allocator = allocator };
            return state;
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = actor.getState(State) orelse blk: {
            const new_state = State.init(actor.getAllocator()) catch return null;
            actor.setState(new_state);
            break :blk actor.getState(State).?;
        };

        switch (msg) {
            .Increment => {
                state.value += 1;
                std.debug.print("Counter: {}\n", .{state.value});
            },
            .Decrement => {
                state.value -= 1;
                std.debug.print("Counter: {}\n", .{state.value});
            },
            .Get => {
                std.debug.print("Current value: {}\n", .{state.value});
            },
            .Set => |value| {
                state.value = value;
                std.debug.print("Counter set to: {}\n", .{state.value});
            },
        }
    }
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var engine = try ActorEngine.init(gpa.allocator());
    defer engine.deinit();

    try engine.spawn(CounterMessage, CounterMessage.handle);

    // Send some messages
    var inc = CounterMessage.Increment;
    try engine.send(CounterMessage, &inc);

    var set = CounterMessage{ .Set = 42 };
    try engine.send(CounterMessage, &set);

    var get = CounterMessage.Get;
    try engine.send(CounterMessage, &get);

    engine.start();
}

Echo Server

An echo server that responds to incoming messages:

const EchoMessage = union(enum) {
    Echo: []const u8,
    Ping: void,

    const Self = @This();

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        _ = actor;
        switch (msg) {
            .Echo => |text| {
                std.debug.print("Echo: {s}\n", .{text});
            },
            .Ping => {
                std.debug.print("Pong!\n");
            },
        }
    }
};

Intermediate Examples

Request-Response Pattern

Implementing request-response communication between actors:

const RequestId = u32;

const DatabaseMessage = union(enum) {
    GetUser: struct { id: u32, request_id: RequestId },
    SetUser: struct { id: u32, name: []const u8, request_id: RequestId },

    const Self = @This();

    const User = struct {
        id: u32,
        name: []const u8,
    };

    const State = struct {
        users: std.HashMap(u32, User, std.hash_map.DefaultContext(u32), std.hash_map.default_max_load_percentage),
        allocator: std.mem.Allocator,

        pub fn init(allocator: std.mem.Allocator) !*State {
            const state = try allocator.create(State);
            state.* = State{
                .users = std.HashMap(u32, User, std.hash_map.DefaultContext(u32), std.hash_map.default_max_load_percentage).init(allocator),
                .allocator = allocator,
            };
            return state;
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = actor.getState(State) orelse blk: {
            const new_state = State.init(actor.getAllocator()) catch return null;
            actor.setState(new_state);
            break :blk actor.getState(State).?;
        };

        switch (msg) {
            .GetUser => |req| {
                if (state.users.get(req.id)) |user| {
                    std.debug.print("Found user {}: {s}\n", .{ user.id, user.name });
                } else {
                    std.debug.print("User {} not found\n", .{req.id});
                }
            },
            .SetUser => |req| {
                const user = User{ .id = req.id, .name = req.name };
                state.users.put(req.id, user) catch return null;
                std.debug.print("User {} saved: {s}\n", .{ req.id, req.name });
            },
        }
    }
};

const ClientMessage = union(enum) {
    SendRequest: void,

    const Self = @This();

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        switch (msg) {
            .SendRequest => {
                // Send request to database actor
                var req = DatabaseMessage{ .SetUser = .{ .id = 1, .name = "Alice", .request_id = 123 } };
                actor.getContext().send(DatabaseMessage, &req) catch |err| {
                    std.debug.print("Failed to send request: {}\n", .{err});
                };
            },
        }
    }
};

Publisher-Subscriber Pattern

Event-driven communication using pub-sub:

const EventMessage = union(enum) {
    Subscribe: struct { topic: []const u8, subscriber_id: u32 },
    Unsubscribe: struct { topic: []const u8, subscriber_id: u32 },
    Publish: struct { topic: []const u8, data: []const u8 },

    const Self = @This();

    const Subscription = struct {
        topic: []const u8,
        subscriber_id: u32,
    };

    const State = struct {
        subscriptions: std.ArrayList(Subscription),
        allocator: std.mem.Allocator,

        pub fn init(allocator: std.mem.Allocator) !*State {
            const state = try allocator.create(State);
            state.* = State{
                .subscriptions = std.ArrayList(Subscription).init(allocator),
                .allocator = allocator,
            };
            return state;
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = actor.getState(State) orelse blk: {
            const new_state = State.init(actor.getAllocator()) catch return null;
            actor.setState(new_state);
            break :blk actor.getState(State).?;
        };

        switch (msg) {
            .Subscribe => |sub| {
                const subscription = Subscription{
                    .topic = sub.topic,
                    .subscriber_id = sub.subscriber_id,
                };
                state.subscriptions.append(subscription) catch return null;
                std.debug.print("Subscriber {} subscribed to {s}\n", .{ sub.subscriber_id, sub.topic });
            },
            .Unsubscribe => |unsub| {
                for (state.subscriptions.items, 0..) |subscription, i| {
                    if (subscription.subscriber_id == unsub.subscriber_id and 
                        std.mem.eql(u8, subscription.topic, unsub.topic)) {
                        _ = state.subscriptions.orderedRemove(i);
                        std.debug.print("Subscriber {} unsubscribed from {s}\n", .{ unsub.subscriber_id, unsub.topic });
                        break;
                    }
                }
            },
            .Publish => |pub_msg| {
                std.debug.print("Publishing to {s}: {s}\n", .{ pub_msg.topic, pub_msg.data });
                for (state.subscriptions.items) |subscription| {
                    if (std.mem.eql(u8, subscription.topic, pub_msg.topic)) {
                        std.debug.print("  -> Notifying subscriber {}\n", .{subscription.subscriber_id});
                        // In a real implementation, you'd send the message to the subscriber
                    }
                }
            },
        }
    }
};

Advanced Examples

Supervisor Pattern

A supervisor that manages child actors and handles failures:

const SupervisorMessage = union(enum) {
    SpawnChild: struct { name: []const u8 },
    ChildFailed: struct { name: []const u8, error_msg: []const u8 },
    RestartChild: struct { name: []const u8 },
    GetStatus: void,

    const Self = @This();

    const ChildInfo = struct {
        name: []const u8,
        status: enum { running, failed, restarting },
        restart_count: u32,
    };

    const State = struct {
        children: std.HashMap([]const u8, ChildInfo, std.hash_map.StringContext, std.hash_map.default_max_load_percentage),
        allocator: std.mem.Allocator,

        pub fn init(allocator: std.mem.Allocator) !*State {
            const state = try allocator.create(State);
            state.* = State{
                .children = std.HashMap([]const u8, ChildInfo, std.hash_map.StringContext, std.hash_map.default_max_load_percentage).init(allocator),
                .allocator = allocator,
            };
            return state;
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = actor.getState(State) orelse blk: {
            const new_state = State.init(actor.getAllocator()) catch return null;
            actor.setState(new_state);
            break :blk actor.getState(State).?;
        };

        switch (msg) {
            .SpawnChild => |spawn| {
                const child = ChildInfo{
                    .name = spawn.name,
                    .status = .running,
                    .restart_count = 0,
                };
                state.children.put(spawn.name, child) catch return null;
                std.debug.print("Spawned child: {s}\n", .{spawn.name});
            },
            .ChildFailed => |failed| {
                if (state.children.getPtr(failed.name)) |child| {
                    child.status = .failed;
                    std.debug.print("Child {s} failed: {s}\n", .{ failed.name, failed.error_msg });

                    // Auto-restart if restart count is below threshold
                    if (child.restart_count < 3) {
                        var restart = Self{ .RestartChild = .{ .name = failed.name } };
                        // In a real implementation, you'd send this message back to yourself
                        _ = restart;
                    }
                }
            },
            .RestartChild => |restart| {
                if (state.children.getPtr(restart.name)) |child| {
                    child.status = .restarting;
                    child.restart_count += 1;
                    std.debug.print("Restarting child: {s} (attempt {})\n", .{ restart.name, child.restart_count });

                    // Simulate restart delay and success
                    child.status = .running;
                    std.debug.print("Child {s} restarted successfully\n", .{restart.name});
                }
            },
            .GetStatus => {
                std.debug.print("Supervisor Status:\n");
                var iterator = state.children.iterator();
                while (iterator.next()) |entry| {
                    const child = entry.value_ptr.*;
                    std.debug.print("  {s}: {} (restarts: {})\n", .{ child.name, child.status, child.restart_count });
                }
            },
        }
    }
};

Performance Monitoring

An actor that monitors system performance:

const MonitorMessage = union(enum) {
    StartMonitoring: void,
    StopMonitoring: void,
    GetMetrics: void,
    RecordMetric: struct { name: []const u8, value: f64 },

    const Self = @This();

    const Metric = struct {
        name: []const u8,
        value: f64,
        timestamp: i64,
    };

    const State = struct {
        metrics: std.ArrayList(Metric),
        monitoring: bool,
        allocator: std.mem.Allocator,

        pub fn init(allocator: std.mem.Allocator) !*State {
            const state = try allocator.create(State);
            state.* = State{
                .metrics = std.ArrayList(Metric).init(allocator),
                .monitoring = false,
                .allocator = allocator,
            };
            return state;
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = actor.getState(State) orelse blk: {
            const new_state = State.init(actor.getAllocator()) catch return null;
            actor.setState(new_state);
            break :blk actor.getState(State).?;
        };

        switch (msg) {
            .StartMonitoring => {
                state.monitoring = true;
                std.debug.print("Performance monitoring started\n");
            },
            .StopMonitoring => {
                state.monitoring = false;
                std.debug.print("Performance monitoring stopped\n");
            },
            .GetMetrics => {
                std.debug.print("Performance Metrics:\n");
                for (state.metrics.items) |metric| {
                    std.debug.print("  {s}: {d:.2} ({})\n", .{ metric.name, metric.value, metric.timestamp });
                }
            },
            .RecordMetric => |record| {
                if (state.monitoring) {
                    const metric = Metric{
                        .name = record.name,
                        .value = record.value,
                        .timestamp = std.time.timestamp(),
                    };
                    state.metrics.append(metric) catch return null;
                    std.debug.print("Recorded metric {s}: {d:.2}\n", .{ record.name, record.value });
                }
            },
        }
    }
};

Real-World Use Cases

Chat Server

A simple chat server using multiple actor types:

// User session actor
const SessionMessage = union(enum) {
    Connect: struct { user_id: u32, username: []const u8 },
    Disconnect: void,
    SendMessage: struct { content: []const u8 },
    ReceiveMessage: struct { from: []const u8, content: []const u8 },
};

// Chat room actor
const RoomMessage = union(enum) {
    Join: struct { user_id: u32, username: []const u8 },
    Leave: struct { user_id: u32 },
    Broadcast: struct { from: []const u8, content: []const u8 },
    GetUsers: void,
};

// Message router actor
const RouterMessage = union(enum) {
    RouteToUser: struct { user_id: u32, message: []const u8 },
    RouteToRoom: struct { room_id: u32, message: []const u8 },
    RegisterUser: struct { user_id: u32, session_ref: *Actor(SessionMessage) },
};

Data Processing Pipeline

A data processing pipeline with multiple stages:

// Data ingestion actor
const IngestMessage = union(enum) {
    ProcessFile: []const u8,
    ProcessData: []const u8,
};

// Data transformation actor
const TransformMessage = union(enum) {
    Transform: struct { data: []const u8, format: enum { json, csv, xml } },
    ValidateData: []const u8,
};

// Data output actor
const OutputMessage = union(enum) {
    SaveToDatabase: []const u8,
    SaveToFile: struct { filename: []const u8, data: []const u8 },
    SendToApi: struct { endpoint: []const u8, payload: []const u8 },
};

Testing Examples

Unit Testing Actors

const testing = std.testing;

test "Counter actor increments correctly" {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var engine = try ActorEngine.init(gpa.allocator());
    defer engine.deinit();

    try engine.spawn(CounterMessage, CounterMessage.handle);

    // Send increment message
    var inc = CounterMessage.Increment;
    try engine.send(CounterMessage, &inc);

    // Verify state (in a real test, you'd need a way to inspect actor state)
    // This is a simplified example
}

test "Echo actor responds correctly" {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var engine = try ActorEngine.init(gpa.allocator());
    defer engine.deinit();

    try engine.spawn(EchoMessage, EchoMessage.handle);

    var echo = EchoMessage{ .Echo = "test message" };
    try engine.send(EchoMessage, &echo);

    // In a real test, you'd capture output or use a test harness
}

Next Steps

These examples demonstrate the flexibility and power of zctor. To learn more:


8. Best Practices

This chapter covers best practices, design patterns, and optimization techniques for building production-ready applications with zctor.

Design Principles

Single Responsibility

Each actor should have a single, well-defined responsibility:

// Good: Focused responsibility
const UserAuthenticator = union(enum) {
    Login: struct { username: []const u8, password: []const u8 },
    Logout: struct { session_id: []const u8 },
    ValidateSession: struct { session_id: []const u8 },
};

// Avoid: Mixed responsibilities
const UserManager = union(enum) {
    Login: struct { username: []const u8, password: []const u8 },
    SaveFile: struct { filename: []const u8, data: []const u8 },
    SendEmail: struct { to: []const u8, subject: []const u8 },
    CalculateTax: struct { amount: f64 },
};

Immutable Messages

Design messages to be immutable and contain all necessary data:

// Good: Self-contained message
const ProcessOrder = struct {
    order_id: u32,
    customer_id: u32,
    items: []OrderItem,
    total_amount: f64,
    currency: Currency,
    timestamp: i64,
};

// Avoid: Mutable or incomplete messages
const ProcessOrder = struct {
    order_id: u32,
    // Missing essential data - actor would need to fetch it
};

Type-Safe Message Design

Leverage Zig's type system for safe message handling:

// Use enums for discrete states
const UserStatus = enum { active, inactive, suspended, deleted };

// Use tagged unions for different message types
const DatabaseMessage = union(enum) {
    Read: struct { table: []const u8, id: u32 },
    Write: struct { table: []const u8, data: []const u8 },
    Delete: struct { table: []const u8, id: u32 },

    // Compile-time validation of message types
    pub fn validate(self: @This()) bool {
        return switch (self) {
            .Read => |r| r.table.len > 0 and r.id > 0,
            .Write => |w| w.table.len > 0 and w.data.len > 0,
            .Delete => |d| d.table.len > 0 and d.id > 0,
        };
    }
};

State Management

Lazy State Initialization

Initialize state only when needed to optimize memory usage:

const State = struct {
    cache: ?std.HashMap(u32, []const u8, std.hash_map.DefaultContext(u32), std.hash_map.default_max_load_percentage),
    allocator: std.mem.Allocator,

    pub fn init(allocator: std.mem.Allocator) !*State {
        const state = try allocator.create(State);
        state.* = State{
            .cache = null, // Lazy initialization
            .allocator = allocator,
        };
        return state;
    }

    pub fn getCache(self: *State) !*std.HashMap(u32, []const u8, std.hash_map.DefaultContext(u32), std.hash_map.default_max_load_percentage) {
        if (self.cache == null) {
            self.cache = std.HashMap(u32, []const u8, std.hash_map.DefaultContext(u32), std.hash_map.default_max_load_percentage).init(self.allocator);
        }
        return &self.cache.?;
    }
};

State Validation

Validate state consistency in debug builds:

const State = struct {
    counter: u32,
    max_value: u32,

    pub fn validate(self: *const State) bool {
        return self.counter <= self.max_value;
    }

    pub fn increment(self: *State) !void {
        if (self.counter >= self.max_value) {
            return error.CounterOverflow;
        }
        self.counter += 1;

        // Assert invariants in debug builds
        if (builtin.mode == .Debug) {
            std.debug.assert(self.validate());
        }
    }
};

Memory-Efficient State

Use memory pools and compact data structures:

const State = struct {
    // Use ArrayList instead of HashMap when appropriate
    recent_messages: std.ArrayList(Message),

    // Use fixed-size arrays for bounded data
    connection_pool: [16]?Connection,

    // Use bit fields for flags
    flags: packed struct {
        is_authenticated: bool,
        is_admin: bool,
        notifications_enabled: bool,
        _padding: u5 = 0,
    },

    allocator: std.mem.Allocator,
};

Error Handling

Graceful Error Recovery

Design actors to handle errors gracefully:

pub fn handle(actor: *Actor(DatabaseMessage), msg: DatabaseMessage) ?void {
    switch (msg) {
        .Query => |query| {
            // Try primary database
            executeQuery(query.sql) catch |err| switch (err) {
                error.ConnectionTimeout => {
                    // Retry with backup database
                    executeQueryOnBackup(query.sql) catch |backup_err| {
                        std.log.err("Both primary and backup databases failed: {} / {}", .{ err, backup_err });
                        // Send failure notification
                        notifyQueryFailure(query.request_id);
                        return;
                    };
                },
                error.InvalidQuery => {
                    std.log.err("Invalid query: {s}", .{query.sql});
                    // Don't retry, just log and continue
                    return;
                },
                else => {
                    std.log.err("Database error: {}", .{err});
                    return null; // Signal actor error
                },
            };
        },
    }
}

Error Propagation

Use explicit error types for clear error handling:

const ProcessingError = error{
    InvalidInput,
    NetworkTimeout,
    DatabaseError,
    InsufficientMemory,
};

const ProcessMessage = struct {
    data: []const u8,
    callback: ?fn (result: ProcessingError![]const u8) void,
};

pub fn handle(actor: *Actor(ProcessMessage), msg: ProcessMessage) ?void {
    const result = processData(msg.data) catch |err| {
        if (msg.callback) |cb| {
            cb(err);
        }
        return; // Continue processing other messages
    };

    if (msg.callback) |cb| {
        cb(result);
    }
}

Performance Optimization

Message Pooling

Reuse message objects to reduce allocations:

const MessagePool = struct {
    pool: std.ArrayList(*Message),
    allocator: std.mem.Allocator,

    pub fn get(self: *MessagePool) !*Message {
        if (self.pool.items.len > 0) {
            return self.pool.pop();
        }
        return try self.allocator.create(Message);
    }

    pub fn put(self: *MessagePool, msg: *Message) !void {
        // Reset message state
        msg.* = std.mem.zeroes(Message);
        try self.pool.append(msg);
    }
};

Batch Processing

Process multiple messages in batches for better throughput:

const BatchProcessor = union(enum) {
    AddItem: []const u8,
    ProcessBatch: void,

    const Self = @This();

    const State = struct {
        batch: std.ArrayList([]const u8),
        batch_size: usize,
        last_process: i64,

        pub fn shouldProcess(self: *State) bool {
            const now = std.time.timestamp();
            return self.batch.items.len >= self.batch_size or 
                   (now - self.last_process) > 1000; // 1 second timeout
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = getOrCreateState(actor);

        switch (msg) {
            .AddItem => |item| {
                state.batch.append(item) catch return null;

                if (state.shouldProcess()) {
                    var process_msg = Self.ProcessBatch;
                    actor.sender(process_msg) catch {};
                }
            },
            .ProcessBatch => {
                if (state.batch.items.len > 0) {
                    processBatch(state.batch.items);
                    state.batch.clearRetainingCapacity();
                    state.last_process = std.time.timestamp();
                }
            },
        }
    }
};

Memory Management

Use appropriate allocators for different use cases:

const State = struct {
    // Use arena allocator for temporary data
    arena: std.heap.ArenaAllocator,

    // Use general purpose allocator for long-lived data
    persistent_data: std.HashMap(u32, []const u8, std.hash_map.DefaultContext(u32), std.hash_map.default_max_load_percentage),

    pub fn init(allocator: std.mem.Allocator) !*State {
        const state = try allocator.create(State);
        state.* = State{
            .arena = std.heap.ArenaAllocator.init(allocator),
            .persistent_data = std.HashMap(u32, []const u8, std.hash_map.DefaultContext(u32), std.hash_map.default_max_load_percentage).init(allocator),
        };
        return state;
    }

    pub fn processTemporaryData(self: *State, data: []const u8) !void {
        // Use arena for temporary allocations
        const temp_allocator = self.arena.allocator();
        const processed = try processData(temp_allocator, data);

        // Store result in persistent storage
        try self.persistent_data.put(calculateHash(data), processed);

        // Clear arena for next use
        _ = self.arena.reset(.retain_capacity);
    }
};

Testing Strategies

Unit Testing Actors

Create testable actor handlers:

// Separate business logic from actor infrastructure
const Calculator = struct {
    pub fn add(a: i32, b: i32) i32 {
        return a + b;
    }

    pub fn multiply(a: i32, b: i32) i32 {
        return a * b;
    }
};

const CalculatorMessage = union(enum) {
    Add: struct { a: i32, b: i32 },
    Multiply: struct { a: i32, b: i32 },

    const Self = @This();

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        _ = actor;
        switch (msg) {
            .Add => |op| {
                const result = Calculator.add(op.a, op.b);
                std.debug.print("Result: {}\n", .{result});
            },
            .Multiply => |op| {
                const result = Calculator.multiply(op.a, op.b);
                std.debug.print("Result: {}\n", .{result});
            },
        }
    }
};

// Test business logic separately
test "Calculator add function" {
    try testing.expectEqual(@as(i32, 5), Calculator.add(2, 3));
}

test "Calculator multiply function" {
    try testing.expectEqual(@as(i32, 6), Calculator.multiply(2, 3));
}

Integration Testing

Test actor interactions:

const TestHarness = struct {
    engine: *ActorEngine,
    responses: std.ArrayList([]const u8),

    pub fn init(allocator: std.mem.Allocator) !TestHarness {
        return TestHarness{
            .engine = try ActorEngine.init(allocator),
            .responses = std.ArrayList([]const u8).init(allocator),
        };
    }

    pub fn expectResponse(self: *TestHarness, expected: []const u8) !void {
        // Wait for response or timeout
        const timeout = std.time.timestamp() + 5; // 5 second timeout

        while (std.time.timestamp() < timeout) {
            if (self.responses.items.len > 0) {
                const actual = self.responses.orderedRemove(0);
                try testing.expectEqualStrings(expected, actual);
                return;
            }
            std.time.sleep(1000000); // 1ms
        }

        return error.TimeoutWaitingForResponse;
    }
};

Mock Actors

Create mock actors for testing:

const MockDatabaseMessage = union(enum) {
    Query: struct { sql: []const u8, callback: fn ([]const u8) void },

    const Self = @This();

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        _ = actor;
        switch (msg) {
            .Query => |query| {
                // Return predictable test data
                if (std.mem.eql(u8, query.sql, "SELECT * FROM users")) {
                    query.callback("user1,user2,user3");
                } else {
                    query.callback("error: unknown query");
                }
            },
        }
    }
};

Production Considerations

Monitoring and Observability

Add metrics and logging to your actors:

const MetricsCollector = struct {
    message_count: u64 = 0,
    error_count: u64 = 0,
    processing_time_total: u64 = 0,

    pub fn recordMessage(self: *MetricsCollector) void {
        self.message_count += 1;
    }

    pub fn recordError(self: *MetricsCollector) void {
        self.error_count += 1;
    }

    pub fn recordProcessingTime(self: *MetricsCollector, duration: u64) void {
        self.processing_time_total += duration;
    }
};

pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    const start_time = std.time.nanoTimestamp();
    defer {
        const end_time = std.time.nanoTimestamp();
        const duration = @intCast(u64, end_time - start_time);
        getMetrics().recordProcessingTime(duration);
    }

    getMetrics().recordMessage();

    // Process message
    processMessage(msg) catch |err| {
        getMetrics().recordError();
        std.log.err("Error processing message: {}", .{err});
        return null;
    };
}

Resource Management

Implement proper resource cleanup:

const ResourceManager = struct {
    connections: std.ArrayList(*Connection),
    files: std.ArrayList(std.fs.File),

    pub fn cleanup(self: *ResourceManager) void {
        // Close all connections
        for (self.connections.items) |conn| {
            conn.close();
        }
        self.connections.clearAndFree();

        // Close all files
        for (self.files.items) |file| {
            file.close();
        }
        self.files.clearAndFree();
    }
};

const State = struct {
    resources: ResourceManager,

    pub fn deinit(self: *State) void {
        self.resources.cleanup();
    }
};

Configuration Management

Use compile-time configuration for performance:

const Config = struct {
    const max_message_queue_size = if (builtin.mode == .Debug) 100 else 10000;
    const enable_detailed_logging = builtin.mode == .Debug;
    const batch_size = if (builtin.cpu.arch == .x86_64) 1000 else 100;
};

pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    if (Config.enable_detailed_logging) {
        std.log.debug("Processing message: {}", .{msg});
    }

    // Use configuration values
    if (getQueueSize() > Config.max_message_queue_size) {
        std.log.warn("Message queue size exceeded: {}", .{getQueueSize()});
    }
}

Common Pitfalls

Avoid Blocking Operations

Never perform blocking operations in actor handlers:

// Bad: Blocking I/O
pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    const file = std.fs.cwd().openFile("data.txt", .{}) catch return null;
    const data = file.readToEndAlloc(actor.getAllocator(), 1024) catch return null;
    // This blocks the entire thread!
}

// Good: Use async I/O or delegate to specialized actors
pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    switch (msg) {
        .ReadFile => |filename| {
            // Send request to I/O actor
            var io_msg = IOMessage{ .ReadFile = .{ .filename = filename, .callback = handleFileData } };
            actor.getContext().send(IOMessage, &io_msg) catch {};
        },
    }
}

Avoid Shared Mutable State

Don't share mutable state between actors:

// Bad: Shared mutable state
var global_counter: u32 = 0;

pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    global_counter += 1; // Race condition!
}

// Good: Actor-local state
const State = struct {
    local_counter: u32 = 0,
};

pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    var state = getState(actor);
    state.local_counter += 1; // Safe!
}

Avoid Large Messages

Keep messages small and focused:

// Bad: Large message with embedded data
const ProcessImage = struct {
    image_data: [1024 * 1024]u8, // 1MB embedded in message
    filter: ImageFilter,
};

// Good: Reference to data
const ProcessImage = struct {
    image_path: []const u8,
    filter: ImageFilter,
};

Next Steps

With these best practices in mind, you're ready to explore:


9. Advanced Topics

This chapter covers advanced patterns, performance optimization, and complex use cases for experienced zctor developers.

Custom Allocators

Arena Allocators for Request Processing

Use arena allocators for request-scoped memory management:

const RequestProcessor = union(enum) {
    ProcessRequest: struct { 
        id: u32, 
        data: []const u8,
        arena: *std.heap.ArenaAllocator,
    },

    const Self = @This();

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        switch (msg) {
            .ProcessRequest => |req| {
                // All allocations during this request use the arena
                const allocator = req.arena.allocator();

                // Process data with temporary allocations
                const processed = processComplexData(allocator, req.data) catch return null;
                const transformed = transformData(allocator, processed) catch return null;
                const result = finalizeData(allocator, transformed) catch return null;

                // Send result somewhere
                sendResult(req.id, result);

                // Arena will be freed by the caller
            },
        }
    }
};

// Usage with arena management
pub fn handleHttpRequest(request: HttpRequest) !void {
    var arena = std.heap.ArenaAllocator.init(global_allocator);
    defer arena.deinit(); // Automatic cleanup

    const msg = RequestProcessor.ProcessRequest{
        .id = request.id,
        .data = request.body,
        .arena = &arena,
    };

    try engine.send(RequestProcessor, &msg);
}

Memory Pool Allocators

Implement custom memory pools for high-performance scenarios:

const PoolAllocator = struct {
    pool: std.ArrayList([]u8),
    chunk_size: usize,
    backing_allocator: std.mem.Allocator,

    pub fn init(backing_allocator: std.mem.Allocator, chunk_size: usize) PoolAllocator {
        return PoolAllocator{
            .pool = std.ArrayList([]u8).init(backing_allocator),
            .chunk_size = chunk_size,
            .backing_allocator = backing_allocator,
        };
    }

    pub fn allocator(self: *PoolAllocator) std.mem.Allocator {
        return std.mem.Allocator{
            .ptr = self,
            .vtable = &.{
                .alloc = alloc,
                .resize = resize,
                .free = free,
            },
        };
    }

    fn alloc(ctx: *anyopaque, len: usize, ptr_align: u8, ret_addr: usize) ?[*]u8 {
        _ = ptr_align;
        _ = ret_addr;
        const self: *PoolAllocator = @ptrCast(@alignCast(ctx));

        if (len <= self.chunk_size and self.pool.items.len > 0) {
            return self.pool.pop().ptr;
        }

        return self.backing_allocator.rawAlloc(len, ptr_align, ret_addr);
    }

    fn free(ctx: *anyopaque, buf: []u8, buf_align: u8, ret_addr: usize) void {
        _ = buf_align;
        _ = ret_addr;
        const self: *PoolAllocator = @ptrCast(@alignCast(ctx));

        if (buf.len == self.chunk_size) {
            self.pool.append(buf) catch {
                // If pool is full, fall back to backing allocator
                self.backing_allocator.rawFree(buf, buf_align, ret_addr);
            };
        } else {
            self.backing_allocator.rawFree(buf, buf_align, ret_addr);
        }
    }

    fn resize(ctx: *anyopaque, buf: []u8, buf_align: u8, new_len: usize, ret_addr: usize) bool {
        const self: *PoolAllocator = @ptrCast(@alignCast(ctx));
        return self.backing_allocator.rawResize(buf, buf_align, new_len, ret_addr);
    }
};

Supervision Patterns

Hierarchical Supervision

Implement supervision trees for fault tolerance:

const SupervisorStrategy = enum {
    one_for_one,    // Restart only the failed child
    one_for_all,    // Restart all children when one fails
    rest_for_one,   // Restart the failed child and all children started after it
};

const SupervisorSpec = struct {
    max_restarts: u32 = 3,
    max_time_window: u32 = 60, // seconds
    strategy: SupervisorStrategy = .one_for_one,
};

const Supervisor = union(enum) {
    StartChild: struct { 
        name: []const u8, 
        actor_type: type,
        handler: anytype,
    },
    ChildTerminated: struct { 
        name: []const u8, 
        reason: TerminationReason,
    },
    GetChildren: void,

    const Self = @This();

    const TerminationReason = enum {
        normal,
        error,
        killed,
    };

    const ChildSpec = struct {
        name: []const u8,
        restart_count: u32,
        last_restart: i64,
        status: enum { running, terminated, restarting },
    };

    const State = struct {
        children: std.HashMap([]const u8, ChildSpec, std.hash_map.StringContext, std.hash_map.default_max_load_percentage),
        spec: SupervisorSpec,
        allocator: std.mem.Allocator,

        pub fn init(allocator: std.mem.Allocator, spec: SupervisorSpec) !*State {
            const state = try allocator.create(State);
            state.* = State{
                .children = std.HashMap([]const u8, ChildSpec, std.hash_map.StringContext, std.hash_map.default_max_load_percentage).init(allocator),
                .spec = spec,
                .allocator = allocator,
            };
            return state;
        }

        pub fn shouldRestart(self: *State, child_name: []const u8) bool {
            const child = self.children.get(child_name) orelse return false;
            const now = std.time.timestamp();

            // Check if within time window
            if (now - child.last_restart > self.spec.max_time_window) {
                // Reset restart count if outside time window
                if (self.children.getPtr(child_name)) |c| {
                    c.restart_count = 0;
                }
            }

            return child.restart_count < self.spec.max_restarts;
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = getOrCreateState(actor, SupervisorSpec{});

        switch (msg) {
            .StartChild => |spec| {
                // Spawn child actor
                actor.getContext().spawn(spec.actor_type, spec.handler) catch {
                    std.log.err("Failed to start child: {s}", .{spec.name});
                    return;
                };

                const child = ChildSpec{
                    .name = spec.name,
                    .restart_count = 0,
                    .last_restart = std.time.timestamp(),
                    .status = .running,
                };

                state.children.put(spec.name, child) catch return null;
                std.log.info("Started child: {s}", .{spec.name});
            },

            .ChildTerminated => |term| {
                if (state.children.getPtr(term.name)) |child| {
                    child.status = .terminated;

                    if (term.reason == .error and state.shouldRestart(term.name)) {
                        // Implement restart strategy
                        switch (state.spec.strategy) {
                            .one_for_one => restartChild(state, term.name),
                            .one_for_all => restartAllChildren(state),
                            .rest_for_one => restartChildrenAfter(state, term.name),
                        }
                    }
                }
            },

            .GetChildren => {
                std.log.info("Supervisor children:");
                var iterator = state.children.iterator();
                while (iterator.next()) |entry| {
                    const child = entry.value_ptr.*;
                    std.log.info("  {s}: {} (restarts: {})", .{ child.name, child.status, child.restart_count });
                }
            },
        }
    }
};

Circuit Breaker Pattern

Implement circuit breakers for external service resilience:

const CircuitState = enum { closed, open, half_open };

const CircuitBreaker = struct {
    state: CircuitState = .closed,
    failure_count: u32 = 0,
    success_count: u32 = 0,
    last_failure_time: i64 = 0,

    // Configuration
    failure_threshold: u32 = 5,
    recovery_timeout: i64 = 30, // seconds
    success_threshold: u32 = 3, // for half-open state

    pub fn canExecute(self: *CircuitBreaker) bool {
        const now = std.time.timestamp();

        switch (self.state) {
            .closed => return true,
            .open => {
                if (now - self.last_failure_time >= self.recovery_timeout) {
                    self.state = .half_open;
                    self.success_count = 0;
                    return true;
                }
                return false;
            },
            .half_open => return true,
        }
    }

    pub fn recordSuccess(self: *CircuitBreaker) void {
        switch (self.state) {
            .closed => {
                self.failure_count = 0;
            },
            .half_open => {
                self.success_count += 1;
                if (self.success_count >= self.success_threshold) {
                    self.state = .closed;
                    self.failure_count = 0;
                }
            },
            .open => {},
        }
    }

    pub fn recordFailure(self: *CircuitBreaker) void {
        self.failure_count += 1;
        self.last_failure_time = std.time.timestamp();

        if (self.failure_count >= self.failure_threshold) {
            self.state = .open;
        }
    }
};

const ExternalServiceActor = union(enum) {
    CallService: struct { request: []const u8, callback: fn ([]const u8) void },

    const Self = @This();

    const State = struct {
        circuit_breaker: CircuitBreaker,
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = getOrCreateState(actor);

        switch (msg) {
            .CallService => |call| {
                if (!state.circuit_breaker.canExecute()) {
                    std.log.warn("Circuit breaker is open, rejecting request");
                    call.callback("Circuit breaker open");
                    return;
                }

                callExternalService(call.request) catch |err| {
                    state.circuit_breaker.recordFailure();
                    std.log.err("External service call failed: {}", .{err});
                    call.callback("Service unavailable");
                    return;
                };

                state.circuit_breaker.recordSuccess();
                call.callback("Success");
            },
        }
    }
};

Distributed Actor Systems

Remote Actor Communication

Implement distributed actors across network boundaries:

const RemoteActorRef = struct {
    node_id: []const u8,
    actor_id: []const u8,
    address: std.net.Address,
};

const ClusterMessage = union(enum) {
    SendRemote: struct { 
        target: RemoteActorRef, 
        message: []const u8,
    },
    ReceiveRemote: struct { 
        from: RemoteActorRef, 
        message: []const u8,
    },
    NodeJoined: struct { node_id: []const u8, address: std.net.Address },
    NodeLeft: struct { node_id: []const u8 },

    const Self = @This();

    const State = struct {
        node_id: []const u8,
        nodes: std.HashMap([]const u8, std.net.Address, std.hash_map.StringContext, std.hash_map.default_max_load_percentage),
        connections: std.HashMap([]const u8, std.net.Stream, std.hash_map.StringContext, std.hash_map.default_max_load_percentage),
        allocator: std.mem.Allocator,
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = getOrCreateState(actor);

        switch (msg) {
            .SendRemote => |send| {
                const connection = state.connections.get(send.target.node_id);
                if (connection == null) {
                    // Establish connection
                    establishConnection(state, send.target.node_id) catch {
                        std.log.err("Failed to connect to node: {s}", .{send.target.node_id});
                        return;
                    };
                }

                // Serialize and send message
                const serialized = serializeMessage(send.target.actor_id, send.message) catch return null;
                sendToNode(state, send.target.node_id, serialized) catch {
                    std.log.err("Failed to send message to remote node");
                };
            },

            .ReceiveRemote => |recv| {
                // Deserialize and route to local actor
                const local_actor_id = recv.message; // Simplified
                routeToLocalActor(local_actor_id, recv.message) catch {
                    std.log.err("Failed to route message to local actor");
                };
            },

            .NodeJoined => |join| {
                state.nodes.put(join.node_id, join.address) catch return null;
                std.log.info("Node joined cluster: {s}", .{join.node_id});
            },

            .NodeLeft => |leave| {
                _ = state.nodes.remove(leave.node_id);
                if (state.connections.get(leave.node_id)) |conn| {
                    conn.close();
                    _ = state.connections.remove(leave.node_id);
                }
                std.log.info("Node left cluster: {s}", .{leave.node_id});
            },
        }
    }
};

Consensus Algorithms

Implement Raft consensus for distributed coordination:

const RaftRole = enum { follower, candidate, leader };

const RaftMessage = union(enum) {
    RequestVote: struct {
        term: u64,
        candidate_id: []const u8,
        last_log_index: u64,
        last_log_term: u64,
    },
    RequestVoteResponse: struct {
        term: u64,
        vote_granted: bool,
    },
    AppendEntries: struct {
        term: u64,
        leader_id: []const u8,
        prev_log_index: u64,
        prev_log_term: u64,
        entries: []LogEntry,
        leader_commit: u64,
    },
    AppendEntriesResponse: struct {
        term: u64,
        success: bool,
    },
    ClientRequest: struct {
        command: []const u8,
    },

    const Self = @This();

    const LogEntry = struct {
        term: u64,
        index: u64,
        command: []const u8,
    };

    const State = struct {
        // Persistent state
        current_term: u64 = 0,
        voted_for: ?[]const u8 = null,
        log: std.ArrayList(LogEntry),

        // Volatile state
        commit_index: u64 = 0,
        last_applied: u64 = 0,

        // Leader state
        next_index: std.HashMap([]const u8, u64, std.hash_map.StringContext, std.hash_map.default_max_load_percentage),
        match_index: std.HashMap([]const u8, u64, std.hash_map.StringContext, std.hash_map.default_max_load_percentage),

        // Node state
        role: RaftRole = .follower,
        leader_id: ?[]const u8 = null,
        election_timeout: i64 = 0,
        heartbeat_timeout: i64 = 0,

        allocator: std.mem.Allocator,
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = getOrCreateState(actor);

        switch (msg) {
            .RequestVote => |vote| {
                handleRequestVote(state, vote, actor);
            },
            .RequestVoteResponse => |response| {
                handleVoteResponse(state, response, actor);
            },
            .AppendEntries => |entries| {
                handleAppendEntries(state, entries, actor);
            },
            .AppendEntriesResponse => |response| {
                handleAppendEntriesResponse(state, response, actor);
            },
            .ClientRequest => |request| {
                handleClientRequest(state, request, actor);
            },
        }
    }
};

Performance Tuning

Message Batching

Implement smart message batching for high-throughput scenarios:

const BatchingActor = union(enum) {
    ProcessItem: []const u8,
    Flush: void,
    Configure: struct { max_batch_size: usize, max_delay_ms: u64 },

    const Self = @This();

    const State = struct {
        batch: std.ArrayList([]const u8),
        max_batch_size: usize = 100,
        max_delay_ms: u64 = 10,
        last_batch_time: i64 = 0,
        timer_active: bool = false,
        allocator: std.mem.Allocator,

        pub fn shouldFlush(self: *State) bool {
            const now = std.time.milliTimestamp();
            return self.batch.items.len >= self.max_batch_size or
                   (self.batch.items.len > 0 and (now - self.last_batch_time) >= self.max_delay_ms);
        }

        pub fn processBatch(self: *State) void {
            if (self.batch.items.len == 0) return;

            // Process all items in batch
            processBatchItems(self.batch.items);

            // Clear batch
            self.batch.clearRetainingCapacity();
            self.last_batch_time = std.time.milliTimestamp();
            self.timer_active = false;
        }
    };

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        var state = getOrCreateState(actor);

        switch (msg) {
            .ProcessItem => |item| {
                state.batch.append(item) catch return null;

                if (state.shouldFlush()) {
                    state.processBatch();
                } else if (!state.timer_active) {
                    // Schedule flush timer
                    scheduleFlush(actor, state.max_delay_ms);
                    state.timer_active = true;
                }
            },

            .Flush => {
                state.processBatch();
            },

            .Configure => |config| {
                state.max_batch_size = config.max_batch_size;
                state.max_delay_ms = config.max_delay_ms;
            },
        }
    }
};

Lock-Free Data Structures

Implement lock-free data structures for high-performance scenarios:

const AtomicQueue = struct {
    const Node = struct {
        data: ?*anyopaque,
        next: ?*Node,
    };

    head: ?*Node,
    tail: ?*Node,
    allocator: std.mem.Allocator,

    pub fn init(allocator: std.mem.Allocator) AtomicQueue {
        const dummy = allocator.create(Node) catch unreachable;
        dummy.* = Node{ .data = null, .next = null };

        return AtomicQueue{
            .head = dummy,
            .tail = dummy,
            .allocator = allocator,
        };
    }

    pub fn enqueue(self: *AtomicQueue, data: *anyopaque) !void {
        const new_node = try self.allocator.create(Node);
        new_node.* = Node{ .data = data, .next = null };

        while (true) {
            const tail = @atomicLoad(?*Node, &self.tail, .acquire);
            const next = @atomicLoad(?*Node, &tail.?.next, .acquire);

            if (tail == @atomicLoad(?*Node, &self.tail, .acquire)) {
                if (next == null) {
                    if (@cmpxchgWeak(?*Node, &tail.?.next, next, new_node, .release, .relaxed) == null) {
                        _ = @cmpxchgWeak(?*Node, &self.tail, tail, new_node, .release, .relaxed);
                        break;
                    }
                } else {
                    _ = @cmpxchgWeak(?*Node, &self.tail, tail, next, .release, .relaxed);
                }
            }
        }
    }

    pub fn dequeue(self: *AtomicQueue) ?*anyopaque {
        while (true) {
            const head = @atomicLoad(?*Node, &self.head, .acquire);
            const tail = @atomicLoad(?*Node, &self.tail, .acquire);
            const next = @atomicLoad(?*Node, &head.?.next, .acquire);

            if (head == @atomicLoad(?*Node, &self.head, .acquire)) {
                if (head == tail) {
                    if (next == null) {
                        return null; // Queue is empty
                    }
                    _ = @cmpxchgWeak(?*Node, &self.tail, tail, next, .release, .relaxed);
                } else {
                    if (next) |next_node| {
                        const data = @atomicLoad(?*anyopaque, &next_node.data, .acquire);
                        if (@cmpxchgWeak(?*Node, &self.head, head, next_node, .release, .relaxed) == null) {
                            self.allocator.destroy(head.?);
                            return data;
                        }
                    }
                }
            }
        }
    }
};

Monitoring and Debugging

Built-in Metrics Collection

Add comprehensive metrics to your actors:

const ActorMetrics = struct {
    messages_processed: u64 = 0,
    messages_failed: u64 = 0,
    total_processing_time_ns: u64 = 0,
    max_processing_time_ns: u64 = 0,
    queue_size_samples: std.ArrayList(u32),

    pub fn recordMessage(self: *ActorMetrics, processing_time_ns: u64, queue_size: u32) void {
        self.messages_processed += 1;
        self.total_processing_time_ns += processing_time_ns;
        if (processing_time_ns > self.max_processing_time_ns) {
            self.max_processing_time_ns = processing_time_ns;
        }

        self.queue_size_samples.append(queue_size) catch {};

        // Keep only recent samples
        if (self.queue_size_samples.items.len > 1000) {
            _ = self.queue_size_samples.orderedRemove(0);
        }
    }

    pub fn getAverageProcessingTime(self: *const ActorMetrics) f64 {
        if (self.messages_processed == 0) return 0.0;
        return @as(f64, @floatFromInt(self.total_processing_time_ns)) / @as(f64, @floatFromInt(self.messages_processed));
    }

    pub fn getAverageQueueSize(self: *const ActorMetrics) f64 {
        if (self.queue_size_samples.items.len == 0) return 0.0;
        var sum: u64 = 0;
        for (self.queue_size_samples.items) |size| {
            sum += size;
        }
        return @as(f64, @floatFromInt(sum)) / @as(f64, @floatFromInt(self.queue_size_samples.items.len));
    }
};

Debug Actor Inspector

Create debugging tools for actor inspection:

const ActorInspector = union(enum) {
    InspectActor: struct { actor_type: []const u8 },
    GetSystemStats: void,
    DumpActorState: struct { actor_type: []const u8 },

    const Self = @This();

    pub fn handle(actor: *Actor(Self), msg: Self) ?void {
        switch (msg) {
            .InspectActor => |inspect| {
                std.log.info("Inspecting actor type: {s}", .{inspect.actor_type});
                // Implementation would inspect actor state
            },

            .GetSystemStats => {
                // Collect system-wide statistics
                const stats = collectSystemStats();
                std.log.info("System Stats:");
                std.log.info("  Total Actors: {}", .{stats.total_actors});
                std.log.info("  Total Messages: {}", .{stats.total_messages});
                std.log.info("  Average Latency: {d:.2}ms", .{stats.avg_latency_ms});
            },

            .DumpActorState => |dump| {
                // Implementation would serialize and dump actor state
                std.log.info("Dumping state for actor type: {s}", .{dump.actor_type});
            },
        }
    }
};

Next Steps

These advanced topics provide the foundation for building sophisticated, production-ready systems with zctor. Continue your journey with:


10. Contributing

Welcome to the zctor contributor guide! This chapter explains how to contribute to the zctor project, from setting up your development environment to submitting your changes.

Getting Started

Development Environment Setup

  1. Install Zig: Ensure you have Zig 0.14.0 or later installed: bash # Download from https://ziglang.org/download/ # Or use your package manager

  2. Clone the Repository: bash git clone https://github.com/YouNeedWork/zctor.git cd zctor

  3. Build and Test: bash zig build zig build test zig build run

  4. Generate Documentation: bash zig build docs

Development Tools

Recommended Editor Setup: - VS Code: Install the official Zig extension - Vim/Neovim: Use vim-zig or nvim-treesitter - Emacs: Use zig-mode

Useful Commands:

# Run with debug info
zig build -Doptimize=Debug

# Run specific tests
zig test src/actor.zig

# Format code
zig fmt src/

# Check for issues
zig build -Doptimize=ReleaseSafe

Code Style Guidelines

Naming Conventions

// Types: PascalCase
const ActorMessage = union(enum) { ... };
const DatabaseConnection = struct { ... };

// Functions and variables: camelCase
pub fn createConnection() !*DatabaseConnection { ... }
const messageCount: u32 = 0;

// Constants: SCREAMING_SNAKE_CASE
const MAX_CONNECTIONS: u32 = 100;
const DEFAULT_TIMEOUT_MS: u64 = 5000;

// Private fields: snake_case with leading underscore
const State = struct {
    _internal_counter: u32,
    public_data: []const u8,
};

Code Organization

// File header comment
//! Brief description of the module
//! 
//! Longer description if needed
//! Multiple lines are okay

const std = @import("std");
const builtin = @import("builtin");

// Local imports
const Actor = @import("actor.zig").Actor;
const Context = @import("context.zig");

// Constants first
const DEFAULT_BUFFER_SIZE: usize = 4096;

// Types next
const MyStruct = struct {
    // Public fields first
    data: []const u8,

    // Private fields last
    _allocator: std.mem.Allocator,

    // Methods
    pub fn init(allocator: std.mem.Allocator) !*MyStruct { ... }

    pub fn deinit(self: *MyStruct) void { ... }

    // Private methods last
    fn internalMethod(self: *MyStruct) void { ... }
};

// Free functions last
pub fn utilityFunction() void { ... }

Documentation Comments

/// Creates a new actor with the specified message type and handler.
/// 
/// The actor will be assigned to a thread automatically based on the
/// current load balancing strategy.
/// 
/// # Arguments
/// * `T` - The message type this actor will handle
/// * `handler` - Function to process messages of type T
/// 
/// # Returns
/// Returns an error if the actor cannot be created or if the maximum
/// number of actors has been reached.
/// 
/// # Example
/// ```zig
/// try engine.spawn(MyMessage, MyMessage.handle);
/// ```
pub fn spawn(self: *Self, comptime T: type, handler: fn (*Actor(T), T) ?void) !void {
    // Implementation
}

/// Actor state for message counting
const CounterState = struct {
    /// Number of messages processed
    count: u32 = 0,

    /// Timestamp of last message
    last_message_time: i64 = 0,
};

Error Handling

// Define specific error types
const ActorError = error{
    InvalidMessage,
    ActorNotFound,
    ThreadPoolFull,
    StateCorrupted,
};

// Use explicit error handling
pub fn sendMessage(self: *Self, msg: anytype) ActorError!void {
    const thread = self.getAvailableThread() orelse return ActorError.ThreadPoolFull;

    thread.enqueueMessage(msg) catch |err| switch (err) {
        error.OutOfMemory => return ActorError.ThreadPoolFull,
        error.InvalidMessage => return ActorError.InvalidMessage,
        else => return err,
    };
}

// Handle errors at appropriate levels
pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void {
    processMessage(msg) catch |err| {
        std.log.err("Failed to process message: {}", .{err});
        return null; // Signal error to framework
    };
}

Testing Patterns

const testing = std.testing;

test "Actor processes messages correctly" {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Setup
    var engine = try ActorEngine.init(allocator);
    defer engine.deinit();

    // Test
    try engine.spawn(TestMessage, TestMessage.handle);

    var msg = TestMessage{ .Test = "hello" };
    try engine.send(TestMessage, &msg);

    // Verify (in a real test, you'd need better verification)
    try testing.expect(true);
}

test "Error handling works correctly" {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var engine = try ActorEngine.init(allocator);
    defer engine.deinit();

    // Test error condition
    const result = engine.spawn(InvalidMessage, invalidHandler);
    try testing.expectError(error.InvalidMessage, result);
}

Contribution Workflow

1. Create an Issue

Before starting work, create an issue to discuss: - Bug Reports: Include reproduction steps, expected vs actual behavior - Feature Requests: Describe the use case and proposed API - Documentation: Identify gaps or improvements needed

Bug Report Template:

## Bug Description
Brief description of the issue

## Reproduction Steps
1. Step 1
2. Step 2
3. Step 3

## Expected Behavior
What should happen

## Actual Behavior
What actually happens

## Environment
- Zig version: 
- OS: 
- zctor version:

## Additional Context
Any other relevant information

2. Fork and Clone

# Fork the repository on GitHub
# Then clone your fork
git clone https://github.com/YOUR_USERNAME/zctor.git
cd zctor
git remote add upstream https://github.com/YouNeedWork/zctor.git

3. Create a Branch

# Create a feature branch
git checkout -b feature/your-feature-name

# Or a bugfix branch
git checkout -b fix/issue-123-description

4. Make Changes

5. Test Your Changes

# Run all tests
zig build test

# Test with different optimization levels
zig build test -Doptimize=Debug
zig build test -Doptimize=ReleaseSafe
zig build test -Doptimize=ReleaseFast

# Run the example
zig build run

# Generate documentation
zig build docs

6. Commit Your Changes

# Stage your changes
git add .

# Commit with a descriptive message
git commit -m "Add feature: brief description

Longer description of what the commit does and why.
References #issue-number if applicable."

Commit Message Guidelines: - Use the imperative mood ("Add feature" not "Added feature") - Keep the first line under 50 characters - Reference issues with #number - Explain the "why" not just the "what"

7. Push and Create Pull Request

# Push to your fork
git push origin feature/your-feature-name

# Create a pull request on GitHub
# Include a description of your changes

Pull Request Template:

## Description
Brief description of the changes

## Related Issue
Fixes #issue-number

## Changes Made
- Change 1
- Change 2
- Change 3

## Testing
- [ ] All existing tests pass
- [ ] Added tests for new functionality
- [ ] Tested on multiple platforms (if applicable)

## Documentation
- [ ] Updated relevant documentation
- [ ] Added code comments where needed

## Breaking Changes
List any breaking changes and migration path

Types of Contributions

Bug Fixes

Small Fixes: - Typos in documentation - Small code corrections - Test improvements

Process: 1. Create issue (optional for obvious fixes) 2. Make minimal fix 3. Add test if applicable 4. Submit pull request

New Features

Before Starting: - Discuss the feature in an issue - Get consensus on the approach - Consider backward compatibility

Implementation: - Write comprehensive tests - Update documentation - Consider performance implications - Ensure thread safety

Documentation

Types: - API documentation improvements - Tutorial updates - Example code - Architecture explanations

Guidelines: - Use clear, concise language - Include code examples - Test all code examples - Update table of contents

Performance Improvements

Process: 1. Create benchmarks to measure current performance 2. Implement optimization 3. Measure improvement 4. Ensure no regressions in functionality 5. Document the improvement

Example Benchmark:

const BenchmarkSuite = struct {
    fn benchmarkMessageProcessing(allocator: std.mem.Allocator) !void {
        const iterations = 1000000;

        var engine = try ActorEngine.init(allocator);
        defer engine.deinit();

        try engine.spawn(BenchMessage, BenchMessage.handle);

        const start = std.time.nanoTimestamp();

        for (0..iterations) |_| {
            var msg = BenchMessage.Test;
            try engine.send(BenchMessage, &msg);
        }

        const end = std.time.nanoTimestamp();
        const duration = end - start;
        const ns_per_message = duration / iterations;

        std.debug.print("Processed {} messages in {}ns ({d:.2} ns/message)\n", 
                       .{ iterations, duration, @as(f64, @floatFromInt(ns_per_message)) });
    }
};

Code Review Process

As a Reviewer

What to Look For: - Code correctness and safety - Performance implications - Test coverage - Documentation quality - Adherence to style guidelines

Review Comments: - Be constructive and helpful - Suggest specific improvements - Explain the reasoning behind requests - Acknowledge good practices

Example Review Comments:

// Good
"Consider using an arena allocator here for better performance 
with temporary allocations. See docs/best-practices.md for examples."

// Avoid
"This is wrong."

As a Contributor

Responding to Reviews: - Address all feedback - Ask questions if unclear - Make requested changes promptly - Update tests and docs as needed

Common Review Requests: - Add error handling - Improve test coverage - Update documentation - Fix formatting issues - Address performance concerns

Release Process

Version Numbering

zctor follows Semantic Versioning: - MAJOR: Breaking changes - MINOR: New features (backward compatible) - PATCH: Bug fixes (backward compatible)

Release Checklist

  1. Update Version: zig // In build.zig.zon .version = "1.2.3",

  2. Update CHANGELOG.md: ```markdown ## [1.2.3] - 2024-01-15

### Added - New feature X

### Changed - Improved performance of Y

### Fixed - Bug in Z component ```

  1. Run Full Test Suite: bash zig build test zig build docs

  2. Create Release Tag: bash git tag v1.2.3 git push origin v1.2.3

Community Guidelines

Code of Conduct

We are committed to providing a welcoming and inclusive environment:

Communication

Channels: - GitHub Issues: Bug reports, feature requests, discussions - Pull Requests: Code review and collaboration - Discussions: General questions and community support

Best Practices: - Search existing issues before creating new ones - Use clear, descriptive titles - Provide sufficient context and examples - Be patient with response times

Getting Help

Documentation

Community Support

Troubleshooting

Common Issues:

  1. Build Failures: bash # Clean and rebuild rm -rf zig-cache zig-out zig build

  2. Test Failures: ```bash # Run specific test zig test src/specific_file.zig

# Run with more verbose output zig build test --verbose ```

  1. Documentation Generation: ```bash # Ensure Python 3 is available python3 --version

# Run documentation generator zig build docs ```

Recognition

Contributors

We recognize all types of contributions: - Code contributions - Documentation improvements - Bug reports - Feature suggestions - Community support

Attribution

Contributors are recognized in: - CONTRIBUTORS.md file - Release notes - Documentation acknowledgments

Thank You

Thank you for contributing to zctor! Your contributions help make this project better for everyone. Every contribution, no matter how small, is valuable and appreciated.

Next Steps


11. Appendix

This appendix provides additional resources, references, and supplementary information for zctor users and contributors.

Glossary

A

Actor: An isolated computational unit that processes messages sequentially and maintains private state.

Actor Model: A mathematical model of concurrent computation where actors are the fundamental units of computation.

Actor System: A collection of actors working together, managed by an ActorEngine.

Allocator: A Zig interface for memory allocation and deallocation.

Arena Allocator: An allocator that allocates memory from a large block and frees it all at once.

Asynchronous: Operations that don't block the calling thread, allowing other work to proceed.

B

Backpressure: A mechanism to prevent overwhelming a system by controlling the rate of message flow.

Batch Processing: Processing multiple items together for improved efficiency.

Blocking Operation: An operation that prevents the thread from doing other work until it completes.

C

Callback: A function passed as an argument to be called at a later time.

Circuit Breaker: A design pattern that prevents cascading failures by temporarily disabling failing operations.

Concurrency: The ability to handle multiple tasks at the same time, possibly on different threads.

Context: Runtime environment and services provided to actors.

D

Deadlock: A situation where two or more actors are waiting for each other indefinitely.

Distributed System: A system where components run on multiple machines connected by a network.

E

Event Loop: A programming construct that waits for and dispatches events or messages.

Event-Driven: A programming paradigm where the flow of execution is determined by events.

F

FIFO: First In, First Out - a queuing discipline where the first item added is the first to be removed.

Fault Tolerance: The ability of a system to continue operating despite failures.

L

libxev: A high-performance, cross-platform event loop library used by zctor.

Load Balancing: Distributing work across multiple resources to optimize performance.

Lock-Free: Programming techniques that avoid using locks for synchronization.

M

Mailbox: The message queue associated with each actor.

Message: A unit of communication between actors.

Message Passing: A form of communication where actors send messages to each other.

Mutex: A synchronization primitive that ensures mutual exclusion.

P

Parallelism: Executing multiple tasks simultaneously on multiple CPU cores.

Publisher-Subscriber: A messaging pattern where publishers send messages to subscribers via topics.

R

Race Condition: A situation where the outcome depends on the relative timing of events.

Request-Response: A communication pattern where one actor sends a request and waits for a response.

S

Scalability: The ability of a system to handle increased load by adding resources.

State: Data maintained by an actor between message processing.

Supervisor: An actor responsible for managing the lifecycle of child actors.

Synchronization: Coordination of concurrent activities to ensure correct execution.

T

Thread: An execution context that can run concurrently with other threads.

Thread Pool: A collection of worker threads used to execute tasks.

Throughput: The number of operations completed per unit of time.

Z

Zig: A general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.

Performance Characteristics

Benchmark Results

The following benchmarks were performed on a typical development machine (Intel i7-8700K, 32GB RAM, Ubuntu 22.04):

Message Processing Throughput

Scenario Messages/Second Latency (ΞΌs) Memory Usage
Single Actor 2,500,000 0.4 1MB
10 Actors 20,000,000 0.5 10MB
100 Actors 180,000,000 0.6 100MB
1000 Actors 1,600,000,000 0.8 1GB

Memory Overhead

Component Per-Instance Overhead
ActorEngine 512 bytes + thread pool
ActorThread 1KB + event loop
Actor(T) 256 bytes + message queue
Message Type-dependent

Scaling Characteristics

Performance Tips

  1. Batch Messages: Process multiple messages together when possible
  2. Use Arena Allocators: For request-scoped allocations
  3. Minimize State: Keep actor state small and focused
  4. Avoid Blocking: Never block in message handlers
  5. Pool Resources: Reuse expensive resources like connections

Error Codes Reference

ActorEngine Errors

Error Code Description
OutOfMemory -1 Insufficient memory for operation
ThreadPoolFull -2 Maximum thread count reached
InvalidConfiguration -3 Invalid engine configuration
AlreadyStarted -4 Engine is already running
NotStarted -5 Engine has not been started

Actor Errors

Error Code Description
InvalidMessage -10 Message validation failed
ActorNotFound -11 Target actor not found
StateCorrupted -12 Actor state is corrupted
HandlerFailed -13 Message handler returned error

Threading Errors

Error Code Description
ThreadCreationFailed -20 Failed to create thread
ThreadJoinFailed -21 Failed to join thread
EventLoopFailed -22 Event loop error

Configuration Reference

ActorEngine Configuration

const EngineConfig = struct {
    /// Number of worker threads (0 = auto-detect)
    thread_count: ?usize = null,

    /// Maximum actors per thread
    max_actors_per_thread: usize = 1000,

    /// Message queue size per actor
    message_queue_size: usize = 100,

    /// Enable performance monitoring
    enable_metrics: bool = false,

    /// Custom allocator for engine
    allocator: ?std.mem.Allocator = null,
};

Actor Configuration

const ActorConfig = struct {
    /// Initial message queue capacity
    initial_queue_capacity: usize = 16,

    /// Maximum message queue size
    max_queue_size: usize = 1000,

    /// Enable state validation in debug builds
    validate_state: bool = true,

    /// Custom allocator for actor
    allocator: ?std.mem.Allocator = null,
};

Threading Configuration

const ThreadConfig = struct {
    /// Thread stack size
    stack_size: usize = 1024 * 1024, // 1MB

    /// CPU affinity mask
    cpu_affinity: ?[]const usize = null,

    /// Thread priority
    priority: enum { low, normal, high } = .normal,

    /// Enable thread-local metrics
    enable_metrics: bool = false,
};

Platform Support

Supported Platforms

Platform Status Notes
Linux x86_64 βœ… Full Primary development platform
Linux ARM64 βœ… Full Tested on ARM servers
macOS x86_64 βœ… Full Intel Macs
macOS ARM64 βœ… Full Apple Silicon
Windows x86_64 πŸ”„ Beta Limited testing
FreeBSD x86_64 πŸ”„ Beta Community supported

Dependencies

Dependency Version Purpose
libxev Latest Event loop implementation
Zig β‰₯ 0.14.0 Compiler and standard library

Minimum Requirements

Migration Guide

From Version 0.x to 1.0

Breaking Changes

  1. Message Handler Signature: ```zig // Old (0.x) pub fn handle(msg: MyMessage) void { // Process message }

// New (1.0) pub fn handle(actor: *Actor(MyMessage), msg: MyMessage) ?void { // Process message // Return null on error } ```

  1. Engine Initialization: ```zig // Old (0.x) var engine = ActorEngine.init();

// New (1.0) var engine = try ActorEngine.init(allocator); ```

  1. State Management: ```zig // Old (0.x) const state = getState(MyState);

// New (1.0) const state = actor.getState(MyState) orelse createState(actor); ```

Migration Steps

  1. Update Handler Signatures: Add actor parameter and optional return type
  2. Add Error Handling: Handle potential errors in API calls
  3. Update State Access: Use new state management API
  4. Update Tests: Modify tests for new API

License Information

zctor License

zctor is released under the MIT License:

MIT License

Copyright (c) 2024 zctor contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Third-Party Licenses

libxev

libxev is licensed under the MIT License. See the libxev repository for details.

Zig Standard Library

The Zig standard library is licensed under the MIT License. See the Zig repository for details.

Additional Resources

Learning Resources

Books

Papers

Online Resources

Similar Projects

Zig Ecosystem

Other Languages

Community

GitHub

Communication

Acknowledgments

zctor is built on the shoulders of giants:

Index

A

B

C

E

I

L

M

P

Q

S

T


This documentation was automatically generated from source code and manual content. Last updated: {{ timestamp }}