Mackes Audio Platform

Audio Processing Platform

Built for environments where routing, latency, and state must stay predictable.

MAP2 combines a JUCE/C++ audio engine, a Python control plane, and remote operator surfaces into one inspectable Linux system. It can run headless, expose documented control paths, and boot into a defined configuration on commodity x86_64 hardware.

The design goal is straightforward: no hidden control layers, no opaque signal path changes, and no confusion about where audio, state, and timing decisions are made.


About the Project Founder: Matthew Mackes


Matthew Mackes’ public LinkedIn profile describes a cybersecurity leader with more than 25 years of hands-on keyboard experience.

  • Profile: Matthew Mackes
  • Location: Buffalo, New York
  • Current title: Cybersecurity leader
Recent MAP2 LinkedIn activity

The public LinkedIn activity feed notes that the new MAP2 product page is up, alongside a larger batch of recent changes and feature work.

Source status

Current repository signals

  • Last nightly build time: Loading from GitHub... (`MAP2-RELEASES/nightly` activity)
  • Last source repo commit time: 2026-03-20 10:17 -04:00 (commit `c8a15247` on `matthewmackes/map2-audio`)
  • Current source branch: main (reference branch for current platform development)
  • Pages site status: Live on GitHub Pages (the status box refreshes from the GitHub API at runtime)
Install MAP2 on Fedora Server

Start with Fedora Server.


MAP2 targets a Fedora Server base because it provides a clean Linux host, current packages, service-first deployment, and a practical path toward repeatable headless audio systems.

  1. Download Fedora Server: get the current official Fedora Server image from the Fedora Project.
  2. Install Fedora Server: perform a clean server install on the target machine and apply system updates.
  3. Prepare the audio host: configure networking, storage, and the hardware interfaces you plan to use.
  4. Install MAP2: use the MAP2 nightly release or source repository to continue platform setup and evaluation.
  • <3 ms: low-latency target with callback timing instrumentation and sustained-load qualification work.
  • 50+: documented capabilities across DSP, routing, MIDI, AVB, control, and deployment surfaces.
  • 3 layers: one platform composed of a real-time engine, control plane, and remote operator surfaces.

System behavior.

The platform is defined less by feature count than by how it behaves under load. Latency, routing, control, and recovery are treated as system properties, not UI claims.

Latency model

Fixed-path processing

  • Sample-rate aware engine: processing runs inside a JUCE real-time callback with explicit buffer and device configuration.
  • Predictable scheduling intent: real-time priority, memory locking, and CPU isolation work are used to reduce timing variance.
  • No hidden resampling path by default: the platform is built around direct control of device rate, buffer size, and graph state.
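The "measurable callback behavior" claim above can be made concrete with a small instrumentation sketch. This is plain Python with a simulated DSP block, not MAP2's actual engine code; names like `CallbackTimer` and `fake_dsp_block` are illustrative.

```python
import time
import statistics

class CallbackTimer:
    """Collects per-callback durations so timing variance is observable, not assumed."""

    def __init__(self):
        self.durations_us = []

    def measure(self, callback, *args):
        start = time.perf_counter_ns()
        result = callback(*args)
        self.durations_us.append((time.perf_counter_ns() - start) / 1_000)
        return result

    def report(self):
        d = self.durations_us
        return {
            "callbacks": len(d),
            "mean_us": statistics.mean(d),
            "max_us": max(d),
            "jitter_us": max(d) - min(d),  # spread between fastest and slowest callback
        }

def fake_dsp_block(buffer):
    # Stand-in for real DSP work: scale every sample by a fixed gain.
    return [s * 0.5 for s in buffer]

timer = CallbackTimer()
for _ in range(100):
    timer.measure(fake_dsp_block, [0.0] * 256)
print(timer.report())
```

The budget this instrumentation is measured against follows directly from the device configuration: at 48 kHz, a 128-sample buffer allows 128/48000 ≈ 2.67 ms per callback, which is why the latency target is tied to explicit rate and buffer control.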
Stability model

Designed to stay in a known state

  • Headless operation: the system can run without a local display and be managed remotely through defined control surfaces.
  • Sustained-load observability: xruns, callback timing, CPU load, and service status are exposed for inspection.
  • No implicit background reconfiguration: engine state, presets, and service boundaries are explicit and inspectable.
Routing and failure behavior

Signal paths stay visible

  • Deterministic routing: chains, parallel paths, bypass state, and preset recall are modeled directly instead of hidden behind scene logic.
  • Failure visibility: device loss, xruns, and runtime faults are surfaced through logs, metrics, and control endpoints.
  • Graceful degradation target: control remains available even when hardware conditions change.
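The idea that chains, bypass state, and preset recall are modeled directly can be sketched as ordinary data structures. This is a minimal illustration, not MAP2's actual object model; `PluginSlot` and `Chain` are hypothetical names.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class PluginSlot:
    name: str
    bypassed: bool = False
    params: dict = field(default_factory=dict)

@dataclass
class Chain:
    slots: list = field(default_factory=list)

    def snapshot(self):
        # A snapshot is ordinary data: the full chain state, deep-copied.
        return copy.deepcopy(self.slots)

    def recall(self, snap):
        # Preset recall is a whole-state transition, not hidden scene logic.
        self.slots = copy.deepcopy(snap)

chain = Chain([PluginSlot("gate"), PluginSlot("amp", params={"gain": 0.6})])
saved = chain.snapshot()
chain.slots[1].params["gain"] = 0.9   # live tweak
chain.slots[0].bypassed = True        # bypass is explicit, inspectable state
chain.recall(saved)                   # recall restores the exact saved state
print(chain.slots[1].params["gain"], chain.slots[0].bypassed)
```

Because every routing decision lives in plain state like this, "failure visibility" reduces to reading the same objects the engine runs from.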

Core capabilities.

These sections describe what the platform actually does. Each one maps to code, services, protocols, or documented operating behavior in the source tree.

Core audio

Real-time audio engine

  • Fixed-latency processing path: device rate, block size, callback timing, and xrun behavior are configurable and observable.
  • Deterministic graph control: chains, bypass state, ordering, and snapshots are explicit runtime objects.
  • Sustained-load visibility: the engine exposes callback metrics instead of hiding real-time behavior.
  • Parallel path support: dual-chain and split/merge workflows allow deliberate signal-graph construction.
DSP and effects

Effects and tone processing

  • Neural Amp Modeler integration: CPU-hosted NAM processing is available inside the live graph.
  • Convolution support: impulse responses are part of the signal path for cabinet and space processing.
  • Built-in DSP inventory: dynamics, EQ, filters, pitch, delay, modulation, and reverb are available without third-party dependencies.
  • LV2 hosting: external processors can be discovered, loaded, and controlled through the same graph model.
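As a flavor of the built-in DSP inventory, here is the classic one-pole low-pass filter, one of the simplest members of any filtering suite. This is a textbook reference form for illustration, not MAP2's actual filter implementation.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=48_000):
    """Classic one-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    # Smoothing coefficient derived from the cutoff frequency.
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

# A step input settles toward 1.0; the cutoff controls how fast it gets there.
step = [1.0] * 2048
smoothed = one_pole_lowpass(step, cutoff_hz=100.0)
print(round(smoothed[-1], 3))
```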
Control layer

API, UI, and observability

  • Documented control plane: FastAPI endpoints expose engine, preset, routing, and system state.
  • Remote operator surfaces: browser and terminal interfaces control the same underlying platform state.
  • WebSocket streaming: meters, status, and runtime signals are available without polling every state change.
  • Inspectable diagnostics: health and performance data are exposed instead of buried in ad hoc tooling.
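The push model described above (streaming meters and status instead of polling every state change) can be sketched with stdlib `asyncio`. This is an illustration of the pattern, not the actual MAP2 WebSocket endpoint.

```python
import asyncio

async def meter_source(queue, frames):
    # Engine side: push a meter reading whenever state changes.
    for level in frames:
        await queue.put({"meter_db": level})
    await queue.put(None)  # end-of-stream sentinel

async def operator_client(queue):
    # Client side: await pushed events instead of polling for changes.
    received = []
    while (event := await queue.get()) is not None:
        received.append(event)
    return received

async def main():
    queue = asyncio.Queue()
    _, client_events = await asyncio.gather(
        meter_source(queue, [-18.0, -12.5, -6.0]),
        operator_client(queue),
    )
    return client_events

events = asyncio.run(main())
print(events)
```

A WebSocket stream is the same shape over the network: the client holds one connection open and receives state changes as they happen.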
MIDI and workflows

MIDI hub and remote performance control

  • MIDI routing and mapping: controller input can be learned, forwarded, and fanned out across multiple targets.
  • Preset recall: program-style changes and snapshot workflows are treated as explicit state transitions.
  • Transport and automation paths: control messages are part of the platform design, not an afterthought.
  • Remote-first operation: control remains available through browser and text interfaces on headless systems.
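The learn/forward/fan-out behavior above can be sketched as a routing table keyed by channel and CC number. `CCFanOut` is a hypothetical name for illustration; the non-consumptive property is simply that a message is applied to every mapped target rather than claimed by the first.

```python
class CCFanOut:
    """Non-consumptive CC routing: one controller message can drive many targets."""

    def __init__(self):
        self.routes = {}  # (channel, cc) -> list of (plugin, parameter) targets

    def learn(self, channel, cc, plugin, parameter):
        # MIDI learn: bind an incoming CC to a plugin parameter.
        self.routes.setdefault((channel, cc), []).append((plugin, parameter))

    def dispatch(self, channel, cc, value, state):
        # Apply the message to every mapped target, normalizing 0-127 to 0.0-1.0.
        for plugin, parameter in self.routes.get((channel, cc), []):
            state[(plugin, parameter)] = value / 127.0
        return state

hub = CCFanOut()
hub.learn(1, 11, "delay", "mix")    # one expression pedal ...
hub.learn(1, 11, "reverb", "wet")   # ... mapped to two plugins at once
state = hub.dispatch(1, 11, 64, {})
print(state)
```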
Network audio

AVB and multi-node operation

  • AVB-aware networking: the platform includes gPTP, AVDECC, and enumeration work for deterministic Ethernet audio systems.
  • Control-plane clustering: multiple nodes can be monitored and managed through one control surface.
  • Distributed system design: the architecture extends beyond single-box processing into multi-node topologies.
  • Hardware-sensitive deployment: NIC and timing assumptions are treated as engineering constraints.
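The gPTP work above rests on the standard PTP two-way timestamp exchange. The offset and path-delay arithmetic that a daemon like ptp4l performs continuously can be shown in a few lines; the function below is an illustration of that math under a symmetric-path assumption, not MAP2's ptp4l integration itself.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP two-way exchange:
    t1: master sends Sync        t2: slave receives it
    t3: slave sends Delay_Req    t4: master receives it
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay, assumed symmetric
    return offset, delay

# Slave clock running 500 ns ahead across a symmetric 1000 ns path (times in ns).
offset, delay = ptp_offset_and_delay(t1=0, t2=1500, t3=2000, t4=2500)
print(offset, delay)
```

Sub-microsecond sync is what makes deterministic Ethernet audio possible: every node samples and presents audio against the same corrected clock.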
System operation

Deployment and platform services

  • Appliance-style deployment: systemd services, host tuning, and startup ordering are part of the platform design.
  • PipeWire and JACK integration: Linux backend behavior is explicit rather than assumed.
  • Persistent state: presets, chain definitions, and cached metadata survive restarts as defined configuration.
  • Technical runbooks: documentation covers architecture, qualification, and deployment details.
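The persistent-state point above (the feature inventory names an SQLite backend with chains stored as JSON) can be sketched with the stdlib `sqlite3` module. The schema and function names here are hypothetical, chosen only to show the restart-survivable shape of the data.

```python
import json
import sqlite3

def open_store(path=":memory:"):
    # On a real deployment this would be a file path that survives restarts.
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS presets (name TEXT PRIMARY KEY, chain TEXT)")
    return db

def save_preset(db, name, chain):
    # Chains persist as JSON so saved state stays readable and shareable.
    db.execute("INSERT OR REPLACE INTO presets VALUES (?, ?)", (name, json.dumps(chain)))
    db.commit()

def load_preset(db, name):
    row = db.execute("SELECT chain FROM presets WHERE name = ?", (name,)).fetchone()
    return json.loads(row[0]) if row else None

db = open_store()
save_preset(db, "lead", {"slots": ["gate", "amp", "delay"], "gain": 0.6})
print(load_preset(db, "lead"))
```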
MAP2 platform layers diagram

Top 100 features

This is the full crawlable MAP2 feature inventory drawn from the source document. It is published as normal HTML so search engines can index the platform's scope across guitar processing, Linux audio, MIDI routing, AVB networking, clustering, and system control.

1-35

Guitar, effects, MIDI and hardware control

  1. Neural Amp Modeler (NAM) Integration: Utilizes AI for hyper-realistic guitar and bass amplifier simulations.
  2. Impulse Response (IR) Convolution Engine: Load, manage, and use custom IR files for cabinet, speaker, and room simulation.
  3. Studio-Grade Effects Suite: Comprehensive collection of built-in effects for complete signal chain construction.
  4. Vintage Amp Emulations: Includes classic models such as the S1S0 and Hood Tweed Bassman.
  5. Advanced Modulation and Pitch Effects: Ultra-Harmonizer, Poly Shifter, and ambient Shoe Gaze processors.
  6. Full Dynamics Suite: Built-in Compressor, Limiter, Noise Gate, and Expander.
  7. Time-Based Effects: High-quality Chorus, Phaser, Flanger, Delay, and Reverb processors.
  8. Comprehensive Filtering: Parametric EQs, Graphic EQs, High-Pass Filters, and Low-Pass Filters.
  9. LV2 Plugin Host: Natively supports and hosts third-party LV2 plugins for near-limitless tonal expansion.
  10. Dual Processing Chains: Run two fully independent effects chains in parallel for complex routing.
  11. A/B Chain Morphing: Seamlessly crossfade and interpolate parameters between two distinct chains.
  12. Series and Parallel Path Routing: Split and merge the signal path within a chain for advanced effects blending.
  13. Plugin Preset Management: Save, load, and manage presets for individual plugins.
  14. Chain Snapshots: Save and recall the entire state of a signal path, including all plugins and their settings.
  15. MIDI Learn Functionality: Instantly map any MIDI CC message to any plugin parameter for hands-free control.
  16. Non-Consumptive MIDI CC Processing: CC messages pass through all plugins, allowing one controller to modulate multiple effects.
  17. MIDI CC Fan-Out: A single MIDI controller can be mapped to and control parameters on multiple different plugins simultaneously.
  18. Series MIDI Flow Architecture: MIDI messages flow sequentially through the plugin chain by default.
  19. Parallel MIDI Branching: Split a single MIDI input to control multiple instruments or effects in parallel.
  20. MIDI Program Change Support: For preset and chain switching from external MIDI controllers.
  21. Hardware LCD Multi-Page UI: Standalone interface inspired by high-end processors like Kemper, Helix, and Axe-FX.
  22. Standalone Operation via LCD: Full control without a connected computer, ideal for live performance.
  23. LCD Status Page: At-a-glance view of sample rate, buffer size, CPU load, and the active chain name.
  24. LCD VU Meters Page: Real-time stereo level meters with peak-hold functionality and configurable color zones.
  25. LCD Chain Page: Visually displays the current effects chain with a scrollable list of active plugins.
  26. LCD Plugin Browser: Scroll through all available LV2 plugins and view their bypass status directly on the hardware.
  27. LCD MIDI Activity Page: Monitor connected MIDI devices and see real-time message activity.
  28. LCD Performance Page: Detailed, real-time metrics for CPU load, xruns, and audio callback times.
  29. Hardware Rotary Encoder Support: Navigate menus, scroll lists, and adjust parameters with a physical knob.
  30. Hardware Navigation Button Support: Dedicated GPIO inputs for Up, Down, Select, Menu, and Back buttons.
  31. Dual LCD Display Support: Power two separate hardware displays for expanded information views.
  32. Custom LCD Characters: Renders graphical VU meter bars and status icons.
  33. LCD Backlight Control: Software-controllable backlight for the hardware display.
  34. LCD Screensaver: Prevents screen burn-in during idle periods.
  35. Hot-Swappable MIDI Devices: Connect and disconnect MIDI controllers without restarting the audio engine.
36-52

Core audio and performance

  36. Ultra-Low Latency Audio Engine: JUCE 8.0-based core delivers sub-3 ms round-trip latency on optimized hardware.
  37. Real-time SCHED_FIFO Processing: Audio threads run at the highest real-time priority for maximum stability.
  38. CPU Core Isolation: Dedicates specific CPU cores exclusively to audio processing, preventing OS interference.
  39. Memory Locking: Prevents page faults on the real-time audio thread, eliminating potential glitches.
  40. PipeWire-Native with JACK Compatibility: Modern Linux audio backend with explicit JACK interoperability.
  41. High-Resolution Audio Support: Process audio at professional sample rates up to 192 kHz.
  42. Configurable Buffer Sizes: Tune latency versus stability with buffer sizes from 64 to 256 samples and beyond.
  43. Automatic Plugin Delay Compensation: Maintains perfect phase alignment across the entire plugin chain.
  44. Zero-Allocation Real-time Audio Path: Pre-allocates all memory to prevent dynamic allocations that could cause xruns.
  45. XRun Detection: Actively monitors for audio dropouts and provides detailed logging and recovery.
  46. Professional Metering Suite: Includes Spectrum Analyzer, LUFS Loudness, VU Meters, and Phase Correlation.
  47. Hot-Swappable USB Audio Interfaces: Add or remove audio hardware without restarting the service.
  48. Optimized Eigen-based NAM Inference: Neural Amp Modeler backend is optimized for high-performance CPU inference.
  49. Graceful Hardware Degradation: The system remains operational and controllable even if audio hardware is disconnected.
  50. Fixed-Point and Floating-Point Processing: Supports various bit depths for maximum audio quality.
  51. IRQ Balancing Management: Disables IRQ balancing on dedicated audio cores to ensure consistent low latency.
  52. Asynchronous Audio Device Handling: Audio hardware changes are handled in the background without blocking the main application.
53-72

System, management and control

  53. FastAPI REST API: Comprehensive API with over 50 endpoints for deep system control and integration.
  54. Real-time WebSocket Streaming: Pushes live metering data to web and TUI clients.
  55. Modern React 18 Web UI: Fully-featured, browser-based interface for control from any device.
  56. Textual-based TUI: A fast, lightweight text-based UI for remote management over SSH.
  57. Multi-User Session Support: Allows for concurrent user sessions with workspace isolation.
  58. SQLite Database Backend: Manages presets, chains, plugin lists, and system configuration.
  59. Async Database Operations: Uses aiosqlite to prevent database queries from blocking the main event loop.
  60. Chain as a Reusable Recipe: A saved Chain is an inert, reusable JSON definition for a specific sound.
  61. Flow as a Runtime Instance: A Flow is a Chain that has been actively deployed to a node for execution.
  62. Flow Orchestrator Service: Manages the deployment, state, and lifecycle of audio processing flows.
  63. Centralized Cluster Management: Use a single UI to control and monitor an entire cluster of MAP2 nodes.
  64. Preset Library with Import/Export: Easily share and back up your sounds.
  65. System Health Monitoring and Diagnostics: API endpoints and UI panels for monitoring system status.
  66. Automatic Service Orchestration: systemd services manage startup, shutdown, and dependencies.
  67. Background Metrics Daemon: Collects performance data without interfering with real-time audio threads.
  68. Plugin Discovery and Scanning: Automatically scans for and registers new LV2 plugins on startup.
  69. LCD Simulation Mode: Run the LCD interface in a terminal without any hardware for testing and development.
  70. Interactive LCD Setup Wizard: A command-line tool guides users through hardware detection and configuration.
  71. Python-based Control Plane: The entire backend and management system is built on modern, asynchronous Python.
  72. Multi-Platform Client Support: Control the system from any modern web browser on any operating system.
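The Chain-versus-Flow distinction above (an inert, reusable recipe versus a deployed runtime instance) can be sketched in a few lines. The class names mirror the terms in the list but are illustrative, not MAP2's actual data model.

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Chain:
    """Inert, reusable recipe: just a JSON-serializable description of a sound."""
    name: str
    plugins: tuple

    def to_json(self):
        return json.dumps({"name": self.name, "plugins": list(self.plugins)})

@dataclass
class Flow:
    """Runtime instance: a Chain actively deployed to a node for execution."""
    chain: Chain
    node: str
    state: str = "deployed"

    def start(self):
        self.state = "running"

recipe = Chain("clean-rhythm", ("gate", "compressor", "reverb"))
flow = Flow(chain=recipe, node="audio-node-1")  # the recipe itself never changes
flow.start()
print(flow.state, recipe.to_json())
```

The same recipe can back many flows on many nodes, which is what makes cluster-wide deployment a state-management problem rather than a copy-paste problem.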
73-100

Platform, architecture and networking

  73. Distributed Multi-Node Cluster System: Scale processing power by linking multiple MAP2 units over a network.
  74. Audio Video Bridging Support: Enables deterministic, time-synchronized, low-latency audio streaming over standard Ethernet.
  75. Digital Snake Capability: Use AVB to transport many channels of audio over a single Ethernet cable.
  76. Distributed DSP and CPU Load Balancing: Split a CPU-heavy effects chain across multiple nodes in an AVB network.
  77. Interoperability with Pro AVB Gear: Connect and stream audio to and from third-party AVB devices.
  78. gPTP Time Synchronization: Achieves sub-microsecond clock sync across all nodes on an AVB network.
  79. AVDECC Device Discovery: Automatically discovers and identifies other AVB-capable devices on the network.
  80. AEM Device Enumeration: Queries and understands the capabilities of other AVB devices.
  81. AEM Caching Database: Caches discovered AVB device information to accelerate system startup.
  82. Dual-Mode Networking: Can operate in a control-only IP mode or a full audio-streaming AVB mode.
  83. All-In-One Operating Mode: Runs the audio engine, backend, and UI on a single machine for convenience.
  84. Dedicated Audio Node Mode: Runs only the audio engine for minimum latency and maximum stability in a cluster.
  85. Dedicated Control Node Mode: Runs only the management backend and UI to control a cluster.
  86. High-Availability Architecture: Design includes primary and standby nodes for critical applications.
  87. Automatic Failover Promotion: The Flow Orchestrator can automatically promote a standby node if a primary node fails.
  88. Support for Intel I210/I225 NICs: Works with specific, professional-grade network hardware for AVB.
  89. ptp4l Integration: Leverages the standard Linux PTP daemon for network time synchronization.
  90. JSON-based Chain Configuration: Chains are stored in a human-readable and easily shareable format.
  91. Headless Operation: Can run entirely without a connected display or user interface, managed remotely.
  92. systemd Service Integration: Deployed as robust, manageable services for production environments.
  93. Bare Metal Target: Designed for deployment on Fedora Linux with a real-time kernel.
  94. Phased AVB Implementation Roadmap: A clear, documented plan for future AVB features like connection management.
  95. Modular, Service-Oriented Architecture: Components are designed as independent, communicating services.
  96. Asyncio-based Event Loop: Built on Python's modern asynchronous framework for high I/O throughput.
  97. Scalability from Single Node to Large Cluster: The architecture is designed to scale from one to 10+ nodes.
  98. Extensible Data Model: The database schema is designed to be extensible for future features.
  99. Hardware Abstraction Layer: Code is separated from specific hardware to allow for future flexibility.
  100. Comprehensive Documentation: A full suite of technical documents covers architecture, features, and implementation details.
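The automatic failover promotion described above can be reduced to a small state machine: mark the failed primary, then promote the first healthy standby. `FlowOrchestrator` here is a toy sketch of that policy, not the actual MAP2 service.

```python
class FlowOrchestrator:
    """Promotes a standby node when the primary is reported failed."""

    def __init__(self, primary, standby):
        self.nodes = {primary: "primary", standby: "standby"}
        self.healthy = {primary: True, standby: True}

    def report_failure(self, node):
        self.healthy[node] = False
        if self.nodes.get(node) == "primary":
            self.nodes[node] = "failed"
            self._promote_standby()

    def _promote_standby(self):
        # Promote the first healthy standby so flows keep running.
        for node, role in self.nodes.items():
            if role == "standby" and self.healthy[node]:
                self.nodes[node] = "primary"
                return

orch = FlowOrchestrator(primary="node-a", standby="node-b")
orch.report_failure("node-a")
print(orch.nodes)
```

In a real cluster the failure report would come from health checks rather than a direct call, and promotion would also redeploy the affected flows to the new primary.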

Integration and openness.

MAP2 is open in the practical sense. Interfaces are documented. State is inspectable. Control does not depend on hidden services or opaque cloud paths.

Documented interfaces

The platform exposes control through FastAPI endpoints, WebSocket streams, browser tooling, terminal tooling, and hardware-oriented interfaces. The same system can be inspected from code, from the API surface, and from the running service graph.

  • API control: engine, preset, routing, and status operations are exposed through documented endpoints.
  • Control surfaces: browser UI, TUI, and LCD-style hardware interaction all target defined system state.
  • Interoperability: MIDI, LV2, PipeWire, JACK, and AVB-related work connect MAP2 to existing Linux and pro-audio ecosystems.
Open-source posture: all behavior is inspectable. No opaque processing layer. No hidden control path. The system can be studied, modified, and operated as a real audio platform rather than consumed as a closed appliance.

System architecture.

MAP2 separates time-critical audio work from orchestration and operator control. That separation is what keeps the system understandable under load.

Layer 1

JUCE / C++ audio engine

This layer owns DSP, plugin execution, signal chains, and device-facing callback behavior. It is where latency and signal integrity are decided.

  • Role: run the signal path with tight timing, explicit memory discipline, and measurable callback behavior.
  • Result: audio processing stays separate from higher-latency control concerns.
Layer 2

Python / FastAPI control plane

This layer manages orchestration, persistence, diagnostics, and remote control. It keeps lifecycle and state management out of the audio callback.

  • Role: expose control APIs, coordinate services, and maintain state without disturbing the real-time layer.
  • Result: the system remains scriptable, observable, and operable over the network.
Layer 3

Web UI and operator surfaces

Operator surfaces present the running system without hiding it. Browser, terminal, and LCD-style interfaces are all views onto the same underlying platform state.

  • Role: provide practical control and monitoring for headless or embedded deployments.
  • Result: the system behaves like an appliance without becoming a black box.
Cross-layer behavior

Why the split matters

  • Stable boundaries: DSP stays in the engine while orchestration stays in the control plane.
  • Inspectable operation: Linux tuning, service state, and device behavior remain visible to the operator.
  • Scalable topology: the same design can describe a single headless box or a multi-node control surface.
  • No hidden behavior: routing, presets, and control decisions remain traceable across layers.

Deployment patterns.

The same platform can be configured for different roles without turning into different products. These patterns reflect how the system is intended to be used.

Headless stage processor

A small x86_64 host running fixed signal paths, preset recall, MIDI control, and remote browser management without a local screen.

Rack and installed-sound node

A service-managed Linux processor that boots into known state, exposes remote control, and stays stable through long operating windows.

Open DSP development bench

A working platform for validating callback behavior, plugin hosting, routing policy, and Linux audio tuning on real hardware.

MIDI and control hub

A central node for controller mapping, automation, preset changes, and API-based coordination with external systems.

AVB-aware network audio platform

A Linux system for studying and extending timing-sensitive audio-over-Ethernet workflows using inspectable services and code.

Single-box appliance deployment

One machine running engine, control plane, and operator surfaces together when simplicity matters more than role separation.

Split-node system design

Separate audio execution from control and monitoring when lower jitter and clearer service boundaries are required.

Long-run qualification

A practical target for validating xruns, callback drift, hardware behavior, and unattended runtime stability over sustained sessions.

Inspectable open appliance

A system that behaves like dedicated hardware while keeping the entire signal path, service graph, and control model visible.

License and open-source compliance

The licensing section reflects the repository’s declared posture. MAP2-owned code and documentation are licensed under the GNU Affero General Public License v3.0 (`AGPL-3.0-only`) unless a file states otherwise. Third-party components retain their original licenses.

Primary license

AGPLv3 for MAP2-owned code

The top-level MAP2 repository license declares `AGPL-3.0-only` for MAP2-owned repository code and documentation unless a file explicitly states otherwise.

Network use

Source availability matters

The repository README explicitly notes that if you modify and run the software for users over a network, you must provide the corresponding source code for that running version as required by AGPLv3.

  • Why it matters: hosted or networked MAP2 deployments still carry source-disclosure obligations.
  • Practical path: keep your running fork, modifications, and notices available to users of that deployed version.
Third-party notices

Third-party licenses stay in force

MAP2 does not relicense third-party code. Components like NeuralAmpModelerCore, PiPedal-derived UI code, JUCE, fonts, and other dependencies remain under their own original licenses and terms.

Built on the shoulders of giants

This section mirrors the project foundations credited in the MAP platform About GUI. MAP2 depends on open ecosystems across audio, Linux, APIs, UI, machine learning, routing, accessibility, and scientific computing.