RealLearningEngine

class RealLearningEngine(learningRate: Float = 0.01f, randomSeed: Long = 42) : LearningEngine

Production-ready neural network-based implementation of LearningEngine for Project KARL.

This class implements a Multi-Layer Perceptron (MLP) neural network that learns from user interactions to generate adaptive predictions and recommendations. The implementation provides real-time learning capabilities with persistent state management and comprehensive analytics.

Neural Network Architecture:

Input Layer (4 neurons)  →  Hidden Layer (8 neurons)  →  Output Layer (3 neurons)

  Input features:  action_type_hash, timestamp_normalized, user_hash, context_presence
  Hidden layer:    tanh activation, Xavier initialization, bias terms, learning_rate = 0.01
  Output layer:    sigmoid activation → next_action_confidence, timing_prediction, preference_score

Learning Algorithm:

  • Forward Propagation: Input → Hidden (tanh) → Output (sigmoid)

  • Backpropagation: Gradient descent with configurable learning rate

  • Weight Initialization: Xavier/Glorot uniform distribution for stable training

  • Error Function: Mean Squared Error (MSE) for continuous value prediction
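The forward/backward pass described above can be sketched as a minimal 4-8-3 MLP. This is an illustrative reconstruction, not the engine's actual implementation; the class name `TinyMlp` and method names are hypothetical.

```kotlin
import kotlin.math.exp
import kotlin.math.sqrt
import kotlin.math.tanh
import kotlin.random.Random

// Hypothetical sketch: 4 → 8 → 3 MLP with tanh hidden units, sigmoid outputs,
// Xavier initialization, and gradient descent on MSE loss.
class TinyMlp(seed: Long = 42L, private val lr: Float = 0.01f) {
    private val rnd = Random(seed)
    private val nIn = 4; private val nHid = 8; private val nOut = 3

    // Xavier/Glorot uniform initialization: limit = sqrt(6 / (fanIn + fanOut))
    private fun xavier(fanIn: Int, fanOut: Int) = FloatArray(fanIn * fanOut) {
        (rnd.nextFloat() * 2f - 1f) * sqrt(6f / (fanIn + fanOut))
    }

    private val w1 = xavier(nIn, nHid); private val b1 = FloatArray(nHid)
    private val w2 = xavier(nHid, nOut); private val b2 = FloatArray(nOut)

    private fun sigmoid(x: Float) = 1f / (1f + exp(-x))

    /** Forward propagation: input → hidden (tanh) → output (sigmoid). */
    fun forward(x: FloatArray): Pair<FloatArray, FloatArray> {
        val h = FloatArray(nHid) { j ->
            var s = b1[j]
            for (i in 0 until nIn) s += x[i] * w1[i * nHid + j]
            tanh(s)
        }
        val y = FloatArray(nOut) { k ->
            var s = b2[k]
            for (j in 0 until nHid) s += h[j] * w2[j * nOut + k]
            sigmoid(s)
        }
        return h to y
    }

    /** One gradient-descent step on MSE loss; returns the loss before the update. */
    fun step(x: FloatArray, target: FloatArray): Float {
        val (h, y) = forward(x)
        val loss = (0 until nOut).map { (y[it] - target[it]).let { d -> d * d } }.sum() / nOut
        // Output delta: dL/dz = 2/n * (y - t) * sigmoid'(z), with sigmoid' = y * (1 - y)
        val dOut = FloatArray(nOut) { k -> 2f / nOut * (y[k] - target[k]) * y[k] * (1f - y[k]) }
        // Hidden delta backpropagated through w2, scaled by tanh' = 1 - h^2
        val dHid = FloatArray(nHid) { j ->
            var s = 0f
            for (k in 0 until nOut) s += dOut[k] * w2[j * nOut + k]
            s * (1f - h[j] * h[j])
        }
        for (j in 0 until nHid) for (k in 0 until nOut) w2[j * nOut + k] -= lr * dOut[k] * h[j]
        for (k in 0 until nOut) b2[k] -= lr * dOut[k]
        for (i in 0 until nIn) for (j in 0 until nHid) w1[i * nHid + j] -= lr * dHid[j] * x[i]
        for (j in 0 until nHid) b1[j] -= lr * dHid[j]
        return loss
    }
}
```

Repeated calls to `step` on the same example should drive the MSE loss down, which is the stability property the Xavier initialization and small learning rate are chosen for.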

Feature Engineering:

  • Action Type Encoding: Hash-based normalization to the [-1, 1] range

  • Temporal Features: Time-of-day normalization for circadian pattern learning

  • User Identification: Hash-based user encoding for personalization

  • Context Awareness: Binary context presence indicator for situational learning
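The four features above map directly onto the input layer. A possible encoding, assuming hash-modulo normalization and milliseconds-since-midnight for the temporal feature (the engine's exact scheme is not documented here):

```kotlin
// Hypothetical feature-engineering sketch for the four input neurons.
fun hashToUnit(s: String): Float =
    // Map an arbitrary hash code into [-1, 1]
    (s.hashCode() % 1000) / 1000f

fun timeOfDayFeature(epochMillis: Long): Float {
    // Normalize milliseconds-since-midnight to [0, 1) for circadian patterns
    val msPerDay = 24L * 60 * 60 * 1000
    return (epochMillis % msPerDay).toFloat() / msPerDay
}

fun encode(actionType: String, timestamp: Long, userId: String, context: String?): FloatArray =
    floatArrayOf(
        hashToUnit(actionType),               // action type encoding
        timeOfDayFeature(timestamp),          // temporal feature
        hashToUnit(userId),                   // user identification
        if (context != null) 1f else 0f,      // binary context-presence indicator
    )
```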

Concurrency & Thread Safety:

  • Atomic Operations: AtomicBoolean for initialization state management

  • Mutex Protection: Mutex guards all neural network weight modifications

  • Coroutine Integration: All operations execute within provided CoroutineScope

  • Non-blocking Training: Asynchronous training step execution
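The concurrency pattern above can be sketched with `kotlinx.coroutines`: an `AtomicBoolean` gates one-time initialization, a `Mutex` serializes weight mutations, and `trainStep` launches a non-blocking `Job` in the caller's scope. The class and member names here are illustrative, not the engine's API.

```kotlin
import java.util.concurrent.atomic.AtomicBoolean
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.launch
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Hypothetical sketch of the thread-safety pattern described above.
class ConcurrencySketch(private val scope: CoroutineScope) {
    private val initialized = AtomicBoolean(false)
    private val weightLock = Mutex()
    var steps = 0
        private set

    fun initialize() {
        // compareAndSet guarantees one-time initialization even under races
        initialized.compareAndSet(false, true)
    }

    fun trainStep(): Job = scope.launch {
        require(initialized.get()) { "initialize() must be called first" }
        // All weight mutations happen under the mutex, so concurrent
        // training steps never interleave mid-update.
        weightLock.withLock { steps++ }
    }
}
```

Because `trainStep` returns the launched `Job`, callers can fire-and-forget or `join()` when they need the update applied before predicting.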

State Persistence:

  • Serialization: Custom binary format for neural network weights and biases

  • State Recovery: Automatic restoration from KarlContainerState during initialization

  • Version Management: State versioning for backward compatibility

  • Memory Management: Bounded training history to prevent memory bloat
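A versioned binary format like the one described could look as follows; the actual wire format and `STATE_VERSION` constant are assumptions for illustration.

```kotlin
import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.io.DataInputStream
import java.io.DataOutputStream

// Hypothetical sketch of a versioned binary weight format.
const val STATE_VERSION = 1

fun serializeWeights(weights: FloatArray): ByteArray {
    val bytes = ByteArrayOutputStream()
    DataOutputStream(bytes).use { out ->
        out.writeInt(STATE_VERSION)       // version header for backward compatibility
        out.writeInt(weights.size)        // length prefix for recovery
        weights.forEach(out::writeFloat)
    }
    return bytes.toByteArray()
}

fun deserializeWeights(data: ByteArray): FloatArray =
    DataInputStream(ByteArrayInputStream(data)).use { inp ->
        require(inp.readInt() == STATE_VERSION) { "unsupported state version" }
        FloatArray(inp.readInt()) { inp.readFloat() }
    }
```

Writing the version first lets `initialize` reject or migrate stale `KarlContainerState` blobs instead of silently misreading them.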

Performance Characteristics:

  • Training Complexity: O(n×m + m×k) per step, where n=input, m=hidden, k=output neurons

  • Prediction Latency: Single forward pass, effectively constant time for the fixed 4-8-3 architecture

  • Memory Usage: O(n×m + m×k) for weight matrices plus bounded history

  • Convergence: Adaptive learning rate with momentum for stable convergence

Analytics & Monitoring:

  • Training Metrics: Loss tracking, training step counting, convergence monitoring

  • Prediction Quality: Confidence scoring, alternative suggestion ranking

  • User Insights: Interaction counting, preference learning, behavioral analysis

  • Visualization: Confidence history sparklines for trend analysis

Example Usage:

val engine = RealLearningEngine(learningRate = 0.01f, randomSeed = 42L)
engine.initialize(savedState, coroutineScope)

// Training from user interactions
val trainingJob = engine.trainStep(interactionData)
trainingJob.join()

// Generate predictions
val prediction = engine.predict(contextData, instructions)
println("Suggested action: ${prediction?.content} (${prediction?.confidence})")

// Monitor learning progress
val insights = engine.getLearningInsights()
println("Progress: ${insights.progressEstimate * 100}%")

Since

1.0.0

Author

KARL AI Development Team

Parameters

learningRate

Neural network learning rate controlling gradient descent step size. Typical values: 0.001-0.1. Higher values enable faster learning but may cause instability. Lower values provide stable but slow convergence.

randomSeed

Random seed for reproducible weight initialization and stochastic operations. Use fixed seed for testing/debugging, random seed for production diversity.

See also

LearningEngine — the interface contract this implementation fulfills

InteractionData — the training data structure processed by this engine

Prediction — the prediction output structure generated by this engine

LearningInsights — the analytics data structure provided by this engine

Constructors

constructor(learningRate: Float = 0.01f, randomSeed: Long = 42)

Creates a new RealLearningEngine with specified hyperparameters. The neural network is not initialized until initialize is called.

Types

data class TrainingExample(val input: FloatArray, val expectedOutput: FloatArray, val timestamp: Long)

Training example data structure for neural network learning.

Functions

open suspend override fun getCurrentState(): KarlContainerState

Serializes complete neural network state for persistence and recovery.

open suspend override fun getLearningInsights(): LearningInsights

Provides comprehensive analytics and performance insights about the learning process.

open override fun getModelArchitectureName(): String

Returns a human-readable string describing the neural network architecture.

open suspend override fun initialize(state: KarlContainerState?, coroutineScope: CoroutineScope)

Initializes the neural network engine with optional state restoration.

open suspend override fun predict(contextData: List<InteractionData>, instructions: List<KarlInstruction>): Prediction?

Generates predictions for user behavior and action recommendations.

open suspend override fun release()

Releases neural network resources and performs cleanup operations.

open suspend override fun reset()

Resets the neural network to its initial untrained state.

open override fun trainStep(data: InteractionData): Job

Executes asynchronous neural network training step from user interaction data.