🎯 Audio Engine Philosophy

FlowState's audio engine prioritizes low latency, stability, and hip-hop-specific workflows. We build on proven open-source foundations rather than reinventing the wheel.

💡
Core Stack: Tone.js for audio scheduling, Web Audio API for DSP, PIXI.js for timeline rendering, Zustand for state management.

📊 Technology Stack

| Layer | Technology | Purpose |
| --- | --- | --- |
| Audio Scheduling | Tone.js | Transport, timing, sequencing |
| Audio Processing | Web Audio API | Effects, mixing, routing |
| DSP Heavy Lifting | AudioWorklet + WASM | Custom processors, plugins |
| Timeline Rendering | PIXI.js v8 | WebGL canvas, waveforms |
| State Management | Zustand | Project state, undo/redo |
| UI Framework | React 18+ | Components, interactions |

🔊 Audio Signal Flow

┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   Source    │    │    Track    │    │   Master    │    │   Output    │
│  (Sample/   │───▶│   Channel   │───▶│     Bus     │───▶│ (Speakers)  │
│   Synth)    │    │  (Effects)  │    │  (Limiter)  │    │             │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
                          │
                          ▼
                   ┌─────────────┐
                   │    Send     │
                   │   Effects   │
                   │ (Reverb/DLY)│
                   └─────────────┘
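
The same flow, sketched in Tone.js. Node names and the sample URL are placeholders; the production engine wires this up inside TrackManager (below).

// routing-sketch.ts — a minimal sketch of the signal flow above
import * as Tone from 'tone';

// Master bus: the limiter sits last, right before the speakers.
const limiter = new Tone.Limiter(-1).toDestination();
const masterBus = new Tone.Channel().connect(limiter);

// One track channel feeding the master bus.
const trackChannel = new Tone.Channel().connect(masterBus);

// Send effect: a reverb return that also sums into the master bus.
const reverbReturn = new Tone.Reverb({ decay: 2.5 }).connect(masterBus);
trackChannel.connect(reverbReturn); // the "send" tap from the track

// Source: a sample player feeding the track channel.
const kick = new Tone.Player('kick.wav').connect(trackChannel);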

⚡ Latency Targets

| Operation | Target | Worst Case | Strategy |
| --- | --- | --- | --- |
| Audio playback | <10ms | 20ms | 128-sample buffer |
| Pad trigger | <5ms | 10ms | Pre-loaded buffers |
| Effect change | <1ms | 5ms | AudioParam ramping |
| Timeline scroll | 60fps | 30fps | WebGL rendering |
| Waveform draw | <100ms | 500ms | Pre-computed peaks |
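
Most of these targets come down to context configuration. A sketch of how the engine might request a low-latency context and tighten Tone.js scheduling (the numbers are assumptions, not measured values); the 128-sample buffer is the Web Audio render quantum, i.e. the browser default rather than a tunable knob.

// latency-config.ts — sketch; values mirror the targets above
import * as Tone from 'tone';

// Ask the browser for its lowest-latency output path and shrink the
// scheduling lookahead so pad triggers fire close to "now".
const context = new Tone.Context({ latencyHint: 'interactive', lookAhead: 0.01 });
Tone.setContext(context);

console.log('lookAhead (s):', context.lookAhead);
console.log('sample rate (Hz):', context.sampleRate);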

🎹 Core Components

1. Transport Controller

// transport.ts
import * as Tone from 'tone';

class TransportController {
  private transport = Tone.getTransport();

  play() {
    this.transport.start();
  }

  stop() {
    this.transport.stop();
    this.transport.position = 0;
  }

  pause() {
    this.transport.pause();
  }

  setBPM(bpm: number) {
    this.transport.bpm.value = bpm;
  }

  setLoop(start: string, end: string) {
    this.transport.loop = true;
    this.transport.loopStart = start;
    this.transport.loopEnd = end;
  }

  get position() {
    return this.transport.position;
  }
}
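
A usage sketch: browsers refuse to start audio until a user gesture, so playback has to be gated behind something like a play button. The #play selector and loop points are placeholders.

// transport-usage.ts — sketch
const transport = new TransportController();

document.querySelector('#play')?.addEventListener('click', async () => {
  await Tone.start();                   // resume the AudioContext on first interaction
  transport.setBPM(90);
  transport.setLoop('0:0:0', '4:0:0');  // loop the first four bars (bars:beats:sixteenths)
  transport.play();
});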

2. Track Manager

// track.ts
interface Track {
  id: string;
  name: string;
  type: 'audio' | 'midi' | 'drum';
  volume: number;
  pan: number;
  mute: boolean;
  solo: boolean;
  clips: Clip[];
  effects: Effect[];
  channel: Tone.Channel;
}

class TrackManager {
  private tracks: Map<string, Track> = new Map();
  private masterChannel: Tone.Channel;

  constructor() {
    // Route everything through a limiter before the speakers to prevent clipping:
    // track channels -> master channel -> limiter -> destination.
    const limiter = new Tone.Limiter(-1).toDestination();
    this.masterChannel = new Tone.Channel().connect(limiter);
  }

  createTrack(type: Track['type']): Track {
    const id = crypto.randomUUID();
    const channel = new Tone.Channel().connect(this.masterChannel);

    const track: Track = {
      id,
      name: `Track ${this.tracks.size + 1}`,
      type,
      volume: 0,
      pan: 0,
      mute: false,
      solo: false,
      clips: [],
      effects: [],
      channel
    };

    this.tracks.set(id, track);
    return track;
  }
}
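
The Track interface carries mute and solo flags, and Tone.Channel exposes matching properties, so the manager mostly has to keep the two in sync. A possible sketch (these helpers are not part of the current API):

// track-controls.ts — sketch of mute/solo handling
function setTrackMute(track: Track, mute: boolean) {
  track.mute = mute;          // UI/state flag
  track.channel.mute = mute;  // silences the channel's output
}

function setTrackSolo(track: Track, solo: boolean) {
  track.solo = solo;
  track.channel.solo = solo;  // Tone's solo bus mutes every non-soloed channel
}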

3. Drum Machine (MPC-Style)

// drum-machine.ts
class DrumMachine {
  private pads: Map<number, Tone.Player> = new Map();
  private sequence: Tone.Sequence | null = null;
  private pattern: boolean[][] = Array(16).fill(null).map(() => Array(16).fill(false));

  async loadKit(samples: string[]) {
    // One player per pad (up to 16). Buffers are decoded up front so a pad
    // trigger later is just a buffer start with no network or decode wait.
    for (let i = 0; i < Math.min(samples.length, 16); i++) {
      const player = new Tone.Player().toDestination();
      await player.load(samples[i]);
      this.pads.set(i, player);
    }
  }

  triggerPad(padIndex: number) {
    const player = this.pads.get(padIndex);
    if (player?.loaded) {
      player.start();
    }
  }

  setPattern(step: number, pad: number, active: boolean) {
    this.pattern[step][pad] = active;
    this.updateSequence();
  }

  private updateSequence() {
    if (this.sequence) {
      this.sequence.dispose();
    }

    this.sequence = new Tone.Sequence(
      (time, step) => {
        for (let pad = 0; pad < 16; pad++) {
          if (this.pattern[step][pad]) {
            this.pads.get(pad)?.start(time);
          }
        }
      },
      [...Array(16).keys()],
      '16n'
    ).start(0);
  }
}
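
A usage sketch for finger drumming: load a kit, then map keyboard keys to pads. The key layout and sample URLs are placeholders.

// drum-usage.ts — sketch
const drums = new DrumMachine();
await drums.loadKit(['kick.wav', 'snare.wav', 'hat-closed.wav', 'hat-open.wav']);

const keyToPad: Record<string, number> = { a: 0, s: 1, d: 2, f: 3 };

window.addEventListener('keydown', (e) => {
  if (e.repeat) return;                 // ignore held-key auto-repeat
  const pad = keyToPad[e.key];
  if (pad !== undefined) drums.triggerPad(pad);
});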

🖼️ Timeline Rendering (PIXI.js)

// timeline.ts
import * as PIXI from 'pixi.js';

class TimelineRenderer {
  // Assigned in init(); the `!` tells TypeScript they are set before use.
  private app!: PIXI.Application;
  private waveformContainer!: PIXI.Container;
  private playheadLine!: PIXI.Graphics;

  async init(canvas: HTMLCanvasElement) {
    this.app = new PIXI.Application();
    await this.app.init({
      canvas,
      width: canvas.width,
      height: canvas.height,
      backgroundColor: 0x1e293b,
      antialias: true,
      resolution: window.devicePixelRatio
    });

    this.waveformContainer = new PIXI.Container();
    this.app.stage.addChild(this.waveformContainer);

    this.playheadLine = new PIXI.Graphics();
    this.app.stage.addChild(this.playheadLine);
  }

  drawWaveform(audioBuffer: AudioBuffer, trackY: number, color: number = 0x06b6d4) {
    const data = audioBuffer.getChannelData(0);
    const peaks = this.computePeaks(data, this.app.screen.width);

    const graphics = new PIXI.Graphics();
    graphics.moveTo(0, trackY);

    for (let i = 0; i < peaks.length; i++) {
      const x = i;
      const y = trackY + peaks[i] * 30;
      graphics.lineTo(x, y);
    }

    graphics.stroke({ color, width: 1 });
    this.waveformContainer.addChild(graphics);
  }

  private computePeaks(data: Float32Array, width: number): number[] {
    const peaks: number[] = [];
    const samplesPerPixel = Math.floor(data.length / width);

    for (let i = 0; i < width; i++) {
      let max = 0;
      const start = i * samplesPerPixel;
      for (let j = start; j < start + samplesPerPixel && j < data.length; j++) {
        max = Math.max(max, Math.abs(data[j]));
      }
      peaks.push(max);
    }

    return peaks;
  }

  updatePlayhead(position: number, pixelsPerSecond: number) {
    const x = position * pixelsPerSecond;
    this.playheadLine.clear();
    this.playheadLine.rect(x, 0, 2, this.app.screen.height);
    this.playheadLine.fill(0xf43f5e);
  }
}
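
To keep the playhead moving, the renderer can be driven from PIXI's shared ticker once per frame, reading the transport clock. The pixels-per-second constant is an assumed zoom level.

// playhead-loop.ts — sketch
import * as PIXI from 'pixi.js';
import * as Tone from 'tone';

const PIXELS_PER_SECOND = 100; // assumed zoom level

function startPlayheadLoop(timeline: TimelineRenderer) {
  // The shared ticker fires once per animation frame (~60fps).
  PIXI.Ticker.shared.add(() => {
    const seconds = Tone.getTransport().seconds; // transport position in seconds
    timeline.updatePlayhead(seconds, PIXELS_PER_SECOND);
  });
}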

🔌 Effects Chain

| Effect | Tone.js Class | Use Case |
| --- | --- | --- |
| EQ | Tone.EQ3 | Shape frequency balance |
| Compressor | Tone.Compressor | Dynamic control |
| Reverb | Tone.Reverb | Space and depth |
| Delay | Tone.FeedbackDelay | Echo effects |
| Filter | Tone.Filter | Low/high pass |
| Distortion | Tone.Distortion | Saturation/overdrive |
| Limiter | Tone.Limiter | Prevent clipping |

Effect Chain Implementation

// effects.ts
class EffectsChain {
  private effects: Tone.ToneAudioNode[] = [];
  // Exposed so the chain can be patched between a track channel and the bus.
  readonly input: Tone.Gain;
  readonly output: Tone.Gain;

  constructor() {
    this.input = new Tone.Gain();
    this.output = new Tone.Gain();
    this.input.connect(this.output);
  }

  addEffect(effect: Tone.ToneAudioNode, index?: number) {
    if (index === undefined) {
      this.effects.push(effect);
    } else {
      this.effects.splice(index, 0, effect);
    }
    this.rebuildChain();
  }

  removeEffect(index: number) {
    const [removed] = this.effects.splice(index, 1);
    removed.dispose();
    this.rebuildChain();
  }

  private rebuildChain() {
    // Disconnect all
    this.input.disconnect();
    this.effects.forEach(e => e.disconnect());

    // Reconnect in order
    let current: Tone.ToneAudioNode = this.input;
    for (const effect of this.effects) {
      current.connect(effect);
      current = effect;
    }
    current.connect(this.output);
  }
}
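
A usage sketch, building a small chain from the table above and inserting it between a track channel and the output (parameter values are illustrative):

// effects-usage.ts — sketch
const chain = new EffectsChain();
chain.addEffect(new Tone.EQ3({ low: -2, mid: 0, high: 1 }));         // dB per band
chain.addEffect(new Tone.Compressor({ threshold: -18, ratio: 4 }));
chain.addEffect(new Tone.FeedbackDelay('8n', 0.25));                 // time, feedback

// Insert the chain between a track channel and the output.
const trackChannel = new Tone.Channel();
trackChannel.connect(chain.input);
chain.output.toDestination(); // or .connect(masterChannel) in the full engine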

💾 Project State (Zustand)

// store/project.ts
import { create } from 'zustand';
import { persist } from 'zustand/middleware';

interface ProjectState {
  id: string;
  name: string;
  tempo: number;
  key: string | null;
  timeSignature: [number, number];
  tracks: Track[];

  // Actions
  setTempo: (bpm: number) => void;
  addTrack: (type: Track['type']) => void;
  removeTrack: (id: string) => void;
  updateTrack: (id: string, updates: Partial<Track>) => void;
  undo: () => void;
  redo: () => void;
}

const useProjectStore = create<ProjectState>()(
  persist(
    (set, get) => ({
      id: crypto.randomUUID(),
      name: 'Untitled Project',
      tempo: 90,
      key: null,
      timeSignature: [4, 4],
      tracks: [],

      setTempo: (bpm) => set({ tempo: bpm }),

      addTrack: (type) => set((state) => ({
        tracks: [...state.tracks, createTrack(type)]
      })),

      removeTrack: (id) => set((state) => ({
        tracks: state.tracks.filter(t => t.id !== id)
      })),

      updateTrack: (id, updates) => set((state) => ({
        tracks: state.tracks.map(t =>
          t.id === id ? { ...t, ...updates } : t
        )
      })),

      undo: () => { /* temporal middleware */ },
      redo: () => { /* temporal middleware */ }
    }),
    { name: 'flowstate-project' }
  )
);
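
Components subscribe with selectors so unrelated state changes don't trigger re-renders. A sketch of a tempo control (in a full build, setTempo would also push the new BPM into the TransportController):

// TempoControl.tsx — sketch
function TempoControl() {
  const tempo = useProjectStore((s) => s.tempo);
  const setTempo = useProjectStore((s) => s.setTempo);

  return (
    <input
      type="number"
      min={40}
      max={220}
      value={tempo}
      onChange={(e) => setTempo(Number(e.target.value))}
    />
  );
}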

🎵 Audio File Support

| Format | Import | Export | Notes |
| --- | --- | --- | --- |
| WAV | ✅ | ✅ | Lossless, primary format |
| MP3 | ✅ | ✅ | Compressed export option |
| OGG | ✅ | ✅ | Open format alternative |
| FLAC | ✅ | ❌ | Import only (browser limitation) |
| AIFF | ✅ | ❌ | Import only |
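
Import is handled by the browser's decoder. A sketch of turning a dropped File into an AudioBuffer, decoding on Tone's own context so sample rates stay consistent:

// import.ts — sketch
async function importAudioFile(file: File): Promise<AudioBuffer> {
  const arrayBuffer = await file.arrayBuffer();
  // decodeAudioData handles WAV/MP3/OGG; FLAC and AIFF decoding varies by browser.
  return Tone.getContext().decodeAudioData(arrayBuffer);
}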

Export Implementation

// export.ts
async function exportProject(format: 'wav' | 'mp3', durationSeconds: number): Promise<Blob> {
  // Render offline at 44.1kHz stereo; length is the project duration in frames.
  const offlineContext = new OfflineAudioContext(
    2,                                   // stereo
    Math.ceil(44100 * durationSeconds),  // total frames
    44100                                // sample rate (Hz)
  );

  // Render all tracks to offline context
  await renderTracksToContext(offlineContext);

  const audioBuffer = await offlineContext.startRendering();

  if (format === 'wav') {
    return audioBufferToWav(audioBuffer);
  } else {
    // Use lamejs for MP3 encoding
    return audioBufferToMp3(audioBuffer);
  }
}

function audioBufferToWav(buffer: AudioBuffer): Blob {
  const interleaved = interleaveChannels(buffer);
  const wavData = encodeWAV(interleaved, buffer.sampleRate);
  return new Blob([wavData], { type: 'audio/wav' });
}
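
interleaveChannels and encodeWAV are not shown above; a minimal sketch of what they could look like for 16-bit stereo PCM:

// wav-encode.ts — sketch of the WAV helpers
function interleaveChannels(buffer: AudioBuffer): Float32Array {
  const left = buffer.getChannelData(0);
  const right = buffer.numberOfChannels > 1 ? buffer.getChannelData(1) : left;
  const out = new Float32Array(left.length * 2);
  for (let i = 0, j = 0; i < left.length; i++) {
    out[j++] = left[i];
    out[j++] = right[i];
  }
  return out;
}

function encodeWAV(samples: Float32Array, sampleRate: number): ArrayBuffer {
  const bytesPerSample = 2;            // 16-bit PCM
  const numChannels = 2;
  const blockAlign = numChannels * bytesPerSample;
  const dataSize = samples.length * bytesPerSample;
  const buffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buffer);

  const writeString = (offset: number, s: string) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };

  // RIFF/WAVE header
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true);
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                      // fmt chunk size
  view.setUint16(20, 1, true);                       // PCM format
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * blockAlign, true); // byte rate
  view.setUint16(32, blockAlign, true);
  view.setUint16(34, 16, true);                      // bits per sample
  writeString(36, 'data');
  view.setUint32(40, dataSize, true);

  // Clamp floats to [-1, 1] and convert to signed 16-bit integers.
  let offset = 44;
  for (let i = 0; i < samples.length; i++, offset += 2) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return buffer;
}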

🚀 Performance Optimizations

| Technique | Benefit | Implementation |
| --- | --- | --- |
| Sample Pre-loading | Zero trigger latency | Load all samples on project open |
| Waveform Caching | Instant timeline draw | Store peaks in IndexedDB |
| Track Freezing | Reduce CPU load | Bounce to audio buffer |
| Web Workers | Non-blocking FFT | Offload analysis to worker |
| AudioWorklet | Real-time DSP | Custom audio processing |
| WASM Effects | Near-native performance | faust2wasm compilers |
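
As one example from the table, peak computation can move to a Web Worker so long samples never block the UI thread. A sketch; the file name and message shape are assumptions.

// peaks.worker.ts — sketch: the same peak scan as TimelineRenderer, off the main thread
self.onmessage = (e: MessageEvent<{ data: Float32Array; width: number }>) => {
  const { data, width } = e.data;
  const samplesPerPixel = Math.floor(data.length / width);
  const peaks = new Float32Array(width);
  for (let i = 0; i < width; i++) {
    let max = 0;
    const start = i * samplesPerPixel;
    for (let j = start; j < start + samplesPerPixel && j < data.length; j++) {
      max = Math.max(max, Math.abs(data[j]));
    }
    peaks[i] = max;
  }
  postMessage(peaks);
};

// main thread — hand the channel data to the worker and cache the result
function computePeaksOffThread(audioBuffer: AudioBuffer, width: number) {
  const worker = new Worker(new URL('./peaks.worker.ts', import.meta.url), { type: 'module' });
  worker.postMessage({ data: audioBuffer.getChannelData(0), width });
  worker.onmessage = (e) => {
    // e.data is the Float32Array of peaks; persist it (e.g. in IndexedDB) and draw.
  };
}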

🎛️ MVP Track Limits

| Tier | Tracks | Reason |
| --- | --- | --- |
| Free | 8 tracks | Sufficient for basic beats |
| Pro | 32 tracks | Full production capability |
| Enterprise | Unlimited | Complex projects |

💡
Hip-Hop Reality: Most beats use 6-12 tracks. The 8-track free tier covers drums, bass, melody, pads, and vocals. Perfect for the target audience.