Formatted docs with rstfmt

This commit is contained in:
2025-05-28 17:23:04 +01:00
parent c2902166dc
commit 38ae2fa1c7
8 changed files with 360 additions and 312 deletions

#############
Development
#############

This guide covers setting up a development environment and contributing
to ``read_aconity_layers``.

*******************
Development Setup
*******************

Prerequisites
=============

- Rust 1.70+ with Cargo
- Python 3.11+
- Poetry
- Git

Environment Setup
=================

#. **Clone and set up the repository:**

   .. code:: bash

      git clone https://github.com/Cian-H/read_aconity_layers.git
      cd read_aconity_layers

#. **Install dependencies:**

   .. code:: bash

      poetry install --with dev,docs

#. **Set up pre-commit hooks:**

   .. code:: bash

      poetry run pre-commit install

#. **Build the Rust extension:**

   .. code:: bash

      poetry run maturin develop

************************
Code Style and Quality
************************

This project uses several tools to maintain code quality:

Python Code
===========

- **Ruff**: For linting and formatting
- **MyPy**: For type checking
- **Pytest**: For testing

Run quality checks:

.. code:: bash

   # Format and lint Python code
   poetry run ruff format .

   # Run Python tests
   poetry run pytest

Rust Code
=========

- **rustfmt**: For formatting
- **clippy**: For linting
- **cargo test**: For testing

Run quality checks:

.. code:: bash

   # Format Rust code
   cargo fmt

   # Run Rust tests
   cargo test

*********
Testing
*********

The project includes comprehensive tests for both Python and Rust
components.

Running Tests
=============

.. code:: bash

   # Run all tests
   poetry run pytest

   # Run Rust tests
   cargo test

Test Structure
==============

- **Python tests**: Located in ``tests/`` directory
- **Rust tests**: Integrated into ``src/rust_fn/mod.rs``
- **Property-based tests**: Uses ``arbtest`` for Rust property testing
- **Regression tests**: Validates against known good outputs

Adding Tests
============

When adding new functionality:

#. **Add Rust tests** in the appropriate module
#. **Add Python integration tests** in ``tests/``
#. **Update regression tests** if output format changes
#. **Add property tests** for mathematical functions

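
A minimal sketch of what step 2 might look like as a pytest-style check. The stub reader and the expected ``(n, 5)`` shape are assumptions based on the data format described elsewhere in this guide; a real test would call ``read_selected_layers`` on fixture files instead:

```python
import numpy as np


def fake_reader(paths):
    # Stand-in for ral.read_selected_layers so this sketch runs
    # without real .pcd files on disk.
    return np.zeros((len(paths) * 10, 5))


def test_reader_returns_n_by_5(reader=fake_reader):
    data = reader(["0.1.pcd", "0.2.pcd"])
    # Every reader in the library returns an (n, 5) array:
    # x, y, z, data1, data2
    assert data.ndim == 2
    assert data.shape[1] == 5
```
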
***************
Documentation
***************

Building Documentation
======================

**Prerequisites**: You need the Rust toolchain installed for
``sphinxcontrib-rust`` to work.

.. code:: bash

   # Install documentation dependencies
   poetry install --with docs

   # Serve the docs locally
   make serve

Documentation Structure
=======================

- **docs/conf.py**: Sphinx configuration
- **docs/index.rst**: Main documentation page
- **docs/python/**: Python API documentation
- **docs/rust/**: Rust API documentation
- **docs/\*.rst**: User guides and tutorials

The documentation automatically generates API references from:

- Python docstrings and type hints
- Rust documentation comments (``///`` and ``//!``)
- Type stub files (``*.pyi``)

**Note**: For Rust API documentation to work properly, you need:

#. Rust toolchain installed (cargo, rustfmt)
#. Proper Rust doc comments in your source code
#. The ``sphinxcontrib-rust`` extension configured correctly

**************
Contributing
**************

Workflow
========

#. **Fork the repository** on GitHub
#. **Create a feature branch** from ``main``
#. **Make your changes** following the coding standards
#. **Add tests** for new functionality
#. **Update documentation** as needed
#. **Run the full test suite** to ensure everything works
#. **Submit a pull request**

Pre-commit Checks
=================

The project uses pre-commit hooks that run automatically:

- Code formatting (Ruff, rustfmt)
- Linting (Ruff, Clippy)
- Type checking (MyPy)
- Version bump validation
- Poetry validation

These checks must pass before commits are accepted.

Release Process
===============

#. **Update version** in ``Cargo.toml`` (triggers version validation)
#. **Update changelog** if applicable
#. **Ensure all tests pass**
#. **Create a release** on GitHub
#. **CI automatically builds and publishes** wheels to PyPI

********************
Architecture Notes
********************

The library is structured in two main components:

Rust Core (``src/rust_fn/``)
============================

- **High-performance file I/O** using CSV reader
- **Parallel processing** with Rayon
- **Memory-efficient array operations** with ndarray
- **Coordinate correction algorithms**

Python Bindings (``src/lib.rs``)
================================

- **PyO3 integration** for seamless Python interop
- **Error handling** conversion from Rust to Python exceptions
- **NumPy integration** for zero-copy array passing
- **Type annotations** via stub files

Performance Considerations
==========================

- File I/O is the primary bottleneck
- Parallel processing scales well with core count
- Memory usage is proportional to dataset size
- Coordinate corrections use vectorized operations

**************************
Common Development Tasks
**************************

Adding a New Function
=====================

#. **Implement in Rust** (``src/rust_fn/mod.rs``)
#. **Add Python binding** (``src/lib.rs``)
#. **Update type stubs** (``read_layers.pyi``)
#. **Add tests** for both Rust and Python
#. **Update documentation**

Debugging Build Issues
======================

- **Check Rust version**: Must be 1.70+
- **Verify PyO3 compatibility**: Should match Python version
- **Clear build cache**: ``cargo clean`` and ``poetry env remove --all``
- **Check dependencies**: Ensure all dev dependencies are installed

Profiling Performance
=====================

For Rust code:

.. code:: bash

   # Profile with perf (Linux)
   cargo build --release

For Python integration:

.. code:: bash

   # Profile with py-spy
   pip install py-spy


.. include:: readme_bottom.rst

####################
Indices and tables
####################

- :ref:`genindex`
- :ref:`modindex`
- :ref:`search`


##############
Installation
##############

**************
Requirements
**************

- Python 3.11 or higher
- NumPy 2.0.0 or higher

*************************
From PyPI (Recommended)
*************************

.. code:: bash

   pip install read-aconity-layers

*************
From Source
*************

If you want to build from source or contribute to the project:

Prerequisites
=============

- Rust 1.70 or higher
- Python 3.11 or higher
- Poetry (for development)

Build Steps
===========

#. Clone the repository:

   .. code:: bash

      git clone https://github.com/Cian-H/read_aconity_layers.git
      cd read_aconity_layers

#. Install Python dependencies:

   .. code:: bash

      poetry install

#. Build the Rust extension:

   .. code:: bash

      poetry run maturin develop

#. Run tests to verify installation:

   .. code:: bash

      poetry run pytest

**************************
Development Installation
**************************

For development work, you'll also want the development dependencies:

.. code:: bash

   poetry install --with dev,docs

This installs additional tools for:

- Code formatting (ruff)
- Type checking (mypy)
- Testing (pytest)
- Documentation building (sphinx)

*****************
Troubleshooting
*****************

Common Issues
=============

**Import Error**: If you get import errors, make sure you've run
``maturin develop`` to build the Rust extension.

**Performance Issues**: The library uses parallel processing by default.
If you encounter memory issues with very large datasets, consider
processing files in smaller batches.

**Rust Compilation Errors**: Make sure you have a recent version of Rust
installed. The minimum supported version is 1.70.

Platform Notes
==============

**Windows**: You may need to install the Microsoft C++ Build Tools if
you don't already have them.

**macOS**: Xcode command line tools are required for Rust compilation.

**Linux**: Most distributions should work out of the box. You may need
to install ``build-essential`` on Debian/Ubuntu systems.


######################
Python API Reference
######################

This section contains the complete Python API reference for
``read_aconity_layers``.

********
Module
********

.. automodule:: read_aconity_layers
   :members:
   :undoc-members:
   :show-inheritance:

*************************
Return Type Information
*************************

The library includes comprehensive type stubs for full IDE support and
type checking.

All functions return NumPy arrays with the following structure:

- **Column 0**: X coordinates (corrected)
- **Column 1**: Y coordinates (corrected)
- **Column 2**: Z coordinates (layer height)
- **Column 3**: Pyrometer 1 readings
- **Column 4**: Pyrometer 2 readings

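
To illustrate the layout above, the columns can be unpacked by index. This sketch uses a small synthetic array in place of a real result from the library:

```python
import numpy as np

# Synthetic stand-in for an (n, 5) array returned by the library
data = np.array([
    [0.10, 0.20, 0.05, 900.0, 910.0],
    [0.30, 0.40, 0.05, 905.0, 915.0],
])

# Columns 0-2: corrected X/Y and layer-height Z; columns 3-4: pyrometers
x, y, z = data[:, 0], data[:, 1], data[:, 2]
pyro1, pyro2 = data[:, 3], data[:, 4]
```
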

##################
Quickstart Guide
##################

This guide will get you up and running with ``read_aconity_layers`` in
just a few minutes.

*************
Basic Usage
*************

Reading All Layers from a Directory
===================================

The most common use case is reading all layer files from a directory:

.. code:: python

   import read_aconity_layers as ral
   import numpy as np

   # Read every layer file in the directory into one array
   data = ral.read_layers("/path/to/layer/files/")

   print("Columns: [x, y, z, data1, data2]")

Reading Specific Files
======================

If you want to read only specific layer files:

.. code:: python

   import read_aconity_layers as ral

   # List of specific files to read
   files = ["/path/to/0.1.pcd", "/path/to/0.2.pcd", "/path/to/0.3.pcd"]
   data = ral.read_selected_layers(files)

Reading a Single Layer
======================

For processing individual layers:

.. code:: python

   import read_aconity_layers as ral

   layer_data = ral.read_layer("/path/to/0.1.pcd")

   x_coords = layer_data[:, 0]
   y_coords = layer_data[:, 1]
   z_coords = layer_data[:, 2]

***********************
Working with the Data
***********************

Understanding the Data Format
=============================

All functions return NumPy arrays with 5 columns:

- **Column 0**: X coordinates (corrected)
- **Column 1**: Y coordinates (corrected)
- **Column 2**: Z coordinates (layer height)
- **Column 3**: Original data column 3
- **Column 4**: Original data column 4

The X and Y coordinates are automatically corrected using the
calibration formulas built into the library.

Example: Basic Data Analysis
============================

.. code:: python

   import read_aconity_layers as ral
   import numpy as np
   import matplotlib.pyplot as plt

   data = ral.read_layers("/path/to/layer/files/")

   # Count points per layer
   unique_z, layer_counts = np.unique(data[:, 2], return_counts=True)

   plt.figure(figsize=(10, 6))
   plt.plot(unique_z, layer_counts)
   plt.xlabel("Layer Height (Z)")
   plt.ylabel("Number of Points")
   plt.title("Points per Layer")
   plt.show()

Example: Processing by Layer
============================

.. code:: python

   import read_aconity_layers as ral
   import numpy as np

   data = ral.read_layers("/path/to/layer/files/")

   # Collect per-layer statistics
   layer_stats = []
   for z in np.unique(data[:, 2]):
       layer_mask = data[:, 2] == z
       layer_points = data[layer_mask]

       stats = {
           "z": z,
           "point_count": len(layer_points),
           "x_mean": layer_points[:, 0].mean(),
           "y_mean": layer_points[:, 1].mean(),
           "data1_mean": layer_points[:, 3].mean(),
           "data2_mean": layer_points[:, 4].mean(),
       }
       layer_stats.append(stats)

   # Convert to structured array for easier analysis
   layer_stats = np.array(layer_stats)

******************
Performance Tips
******************

Parallel Processing
===================

The library automatically uses parallel processing for multiple files.
For best performance:

- Use ``read_layers()`` for directories with many files
- The library will automatically use all available CPU cores
- Larger numbers of files will see better speedup

Memory Usage
============

For very large datasets:

- Consider processing files in batches if memory is limited
- Use ``read_selected_layers()`` to process subsets
- The library streams data efficiently, but the final arrays are held
  in memory

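
The batching advice above can be sketched with a plain chunking helper. The commented lines show how it might be combined with ``read_selected_layers()``; ``all_layer_files`` and ``process`` are hypothetical names:

```python
def chunked(paths, batch_size):
    """Yield successive batches from a list of file paths."""
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]


# for batch in chunked(all_layer_files, batch_size=100):
#     data = ral.read_selected_layers(batch)
#     process(data)  # hypothetical per-batch processing step
```
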
File Organization
=================

For optimal performance:

- Keep layer files in a single directory when using ``read_layers()``
- Use consistent naming (the Z coordinate is extracted from the
  filename)
- Ensure files are properly formatted space-delimited text

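
Because the Z coordinate is taken from the filename (for example ``0.1.pcd`` maps to Z = 0.1), consistent numeric naming matters. An illustrative reimplementation of that mapping, not the library's actual code:

```python
from pathlib import Path


def z_from_filename(path):
    # "0.1.pcd" -> stem "0.1" -> 0.1; a non-numeric stem would raise
    # ValueError, which is why consistent naming is recommended.
    return float(Path(path).stem)
```
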
****************
Error Handling
****************

The library provides detailed error messages for common issues:

.. code:: python

   import read_aconity_layers as ral

   try:
       data = ral.read_layers("/path/to/layer/files/")
   except RuntimeError as e:
       print(f"Processing error: {e}")

************
Next Steps
************

- Check out the full :doc:`python/index` for detailed function
  documentation
- See :doc:`development` if you want to contribute to the project
- For performance-critical applications, review the :doc:`rust/index`


##########
Overview
##########

``read_aconity_layers`` is a high-performance Python library for reading
and processing layer data from Aconity mini powder bed fusion machines.
It's built with Rust for maximum performance and uses PyO3 for seamless
Python integration.

##########
Features
##########

- **Fast**: Built with Rust for high-performance data processing
- **Simple**: Easy-to-use Python API
- **Parallel**: Leverages Rayon for parallel processing of multiple
  files
- **Type-safe**: Full type annotations and stub files included

###############
Quick Example
###############

.. code:: python

   import read_aconity_layers as ral
   import numpy as np

   # Read all layers from a directory
   data = ral.read_layers("/path/to/layer/files/")

   # Read a single layer
   layer = ral.read_layer("/path/to/0.01.pcd")


#####################
read_aconity_layers
#####################

.. image:: https://github.com/Cian-H/read_aconity_layers/workflows/CI/badge.svg
   :target: https://github.com/Cian-H/read_aconity_layers/actions/workflows/CI.yml

.. image:: https://img.shields.io/badge/code%20style-Ruff-D7FF64.svg
   :target: https://github.com/astral-sh/ruff

----

A library for fast reading of layer data from the Aconity mini powder
bed fusion machine.


####################
Rust API Reference
####################

This section documents the internal Rust implementation of
``read_aconity_layers``.

.. note::

   This documentation is primarily intended for contributors and
   developers who want to understand the internal implementation. Most
   users should refer to the :doc:`../python/index` instead.

.. toctree::
   :glob:

   crates/read_aconity_layers/lib

**********
Overview
**********

The Rust implementation provides the high-performance core of
``read_aconity_layers``. Key characteristics include:

Performance Features
====================

- **Parallel Processing**: Uses Rayon for parallel file reading across
  all CPU cores
- **Memory Efficiency**: Streams data rather than loading everything
  into memory at once
- **SIMD Operations**: Leverages vectorized operations for coordinate
  corrections
- **Zero-Copy**: Minimizes data copying between Rust and Python using
  PyO3

Architecture
============

The crate is organized into two main components:

- **Public API** (``src/lib.rs``): PyO3 bindings that expose Rust
  functions to Python
- **Core Logic** (``src/rust_fn/``): Pure Rust implementation of file
  reading and processing

Error Handling
==============

The Rust code uses a comprehensive ``ReadError`` enum that covers all
possible failure modes, from I/O errors to parsing failures. These are
automatically converted to appropriate Python exceptions through the
PyO3 integration.

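
From the Python side, that conversion means callers only ever deal with ordinary exceptions. A hedged sketch of defensive calling code follows; the exact exception classes are an assumption (the quickstart shows ``RuntimeError`` for processing errors):

```python
def safe_read(reader, path):
    """Call a reader function, returning None instead of propagating errors."""
    try:
        return reader(path)
    except (OSError, RuntimeError) as exc:
        print(f"Failed to read {path}: {exc}")
        return None
```
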
Dependencies
============

Key Rust dependencies that power the performance:

- ``ndarray`` - N-dimensional arrays with BLAS integration
- ``rayon`` - Data parallelism library
- ``csv`` - Fast CSV parsing
- ``pyo3`` - Python bindings
- ``numpy`` - NumPy integration for PyO3
- ``glob`` - File path pattern matching
- ``indicatif`` - Progress bars for long operations