Snowflake-Me

A high-performance, highly concurrent, distributed Snowflake ID generator in Rust.

The implementation is lock-free and designed for maximum throughput and minimal latency on multi-core CPUs.
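
A quick way to sanity-check the throughput claim on your own hardware is a simple timing loop. The sketch below is a rough measurement, not a rigorous benchmark; it reuses the Builder API shown in the Quick Start, and the fixed machine_id/data_center_id values (1, 1) are arbitrary placeholders:

use snowflake_me::Snowflake;
use std::time::Instant;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Fixed IDs so this sketch does not depend on the `ip-fallback` feature.
    let sf = Snowflake::builder()
        .machine_id(&|| Ok(1))
        .data_center_id(&|| Ok(1))
        .finalize()?;

    let n = 1_000_000u64;
    let start = Instant::now();
    for _ in 0..n {
        let _ = sf.next_id()?;
    }
    let elapsed = start.elapsed();
    println!(
        "Generated {} IDs in {:?} (~{:.0} IDs/sec)",
        n,
        elapsed,
        n as f64 / elapsed.as_secs_f64()
    );
    Ok(())
}

Note that with the default 12-bit sequence, a single generator can issue at most 4096 IDs per millisecond, so single-node throughput is bounded at roughly four million IDs per second.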

Snowflake ID Structure

The generated ID is a 64-bit unsigned integer (u64) with the following default structure:

+--------------------------+-------------------------+-------------------------+---------------------+--------------------+
| 1 Bit (Unused, Sign Bit) | 41 Bits (Timestamp, ms) | 5 Bits (Data Center ID) | 5 Bits (Machine ID) | 12 Bits (Sequence) |
+--------------------------+-------------------------+-------------------------+---------------------+--------------------+

Note: The bit lengths of all four components are customizable via the Builder, but they must sum to 63 (the sign bit is fixed).
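
To see what the default widths buy you, here is a small standalone sketch of the arithmetic (plain Rust, no crate API involved): 41 timestamp bits cover roughly 69 years of milliseconds, and each 5-bit ID field allows 32 distinct values.

fn main() {
    // Default bit widths from the layout above.
    let (time_bits, dc_bits, machine_bits, seq_bits) = (41u32, 5u32, 5u32, 12u32);

    // Number of distinct values each field can represent.
    let data_centers = 1u64 << dc_bits;  // 32 data centers (IDs 0..=31)
    let machines = 1u64 << machine_bits; // 32 machines per data center
    let ids_per_ms = 1u64 << seq_bits;   // 4096 IDs per millisecond per node

    // 2^41 milliseconds of timestamp range, expressed in years.
    let years = (1u64 << time_bits) as f64 / (1000.0 * 3600.0 * 24.0 * 365.25);

    println!(
        "{} data centers x {} machines, {} IDs/ms each, ~{:.1} years of timestamps",
        data_centers, machines, ids_per_ms, years
    );
}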

Quick Start

1. Add Dependency

Add this library to your Cargo.toml:

[dependencies]
snowflake-me = "0.4.0" # Please use the latest version

To let the generator derive the machine ID and data center ID from the host's IP address when they are not set explicitly, enable the ip-fallback feature:

[dependencies]
snowflake-me = { version = "0.4.0", features = ["ip-fallback"] }

2. Basic Usage

use snowflake_me::Snowflake;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a generator with the default configuration.
    // Note: This requires the `ip-fallback` feature to auto-detect machine and data center IDs.
    let sf = Snowflake::new()?;

    // Generate a unique ID
    let id = sf.next_id()?;
    println!("Generated Snowflake ID: {}", id);

    Ok(())
}

3. Multi-threaded Usage

Snowflake instances can be cloned cheaply, or a single instance can be wrapped in an Arc and shared across threads, as in the example below.

use snowflake_me::Snowflake;
use std::thread;
use std::sync::Arc;
use std::collections::HashSet;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Manually configure machine_id and data_center_id using the Builder.
    // This is the recommended approach for production environments.
    let sf = Snowflake::builder()
        .machine_id(&|| Ok(10))
        .data_center_id(&|| Ok(5))
        .finalize()?;

    let sf_arc = Arc::new(sf);
    let mut handles = vec![];

    for _ in 0..10 {
        let sf_clone = Arc::clone(&sf_arc);
        let handle = thread::spawn(move || {
            let mut ids = Vec::new();
            for _ in 0..10000 {
                ids.push(sf_clone.next_id().unwrap());
            }
            ids
        });
        handles.push(handle);
    }

    let mut all_ids = HashSet::new();
    for handle in handles {
        let ids = handle.join().unwrap();
        for id in ids {
            // Verify that all IDs are unique
            assert!(all_ids.insert(id), "Found duplicate ID: {}", id);
        }
    }

    println!("Successfully generated {} unique IDs across 10 threads.", all_ids.len());
    Ok(())
}

4. Decomposing an ID

You can decompose a Snowflake ID back into its components for debugging or analysis.

use snowflake_me::{Snowflake, DecomposedSnowflake};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Ensure you use the same bit length configuration as when the ID was generated.
    let bit_len_time = 41;
    let bit_len_sequence = 12;
    let bit_len_data_center_id = 5;
    let bit_len_machine_id = 5;

    let sf = Snowflake::builder()
        .bit_len_time(bit_len_time)
        .bit_len_sequence(bit_len_sequence)
        .bit_len_data_center_id(bit_len_data_center_id)
        .bit_len_machine_id(bit_len_machine_id)
        .machine_id(&|| Ok(15))
        .data_center_id(&|| Ok(7))
        .finalize()?;

    let id = sf.next_id()?;
    let decomposed = DecomposedSnowflake::decompose(
        id,
        bit_len_time,
        bit_len_sequence,
        bit_len_data_center_id,
        bit_len_machine_id,
    );

    println!("ID: {}", decomposed.id);
    println!("Time: {}", decomposed.time);
    println!("Data Center ID: {}", decomposed.data_center_id);
    println!("Machine ID: {}", decomposed.machine_id);
    println!("Sequence: {}", decomposed.sequence);

    assert_eq!(decomposed.machine_id, 15);
    assert_eq!(decomposed.data_center_id, 7);

    Ok(())
}

Contributing

Issues and Pull Requests are welcome.

License

This project is dual-licensed under the MIT and Apache 2.0 licenses.