RealLearningEngine
Production-ready neural network-based implementation of LearningEngine for Project KARL.
This class implements a Multi-Layer Perceptron (MLP) neural network that learns from user interactions to generate adaptive predictions and recommendations. The implementation provides real-time learning capabilities with persistent state management and comprehensive analytics.
Neural Network Architecture:
Input Layer (4 neurons)  →  Hidden Layer (8 neurons)  →  Output Layer (3 neurons)

  [action_type_hash]        [tanh activation]          [next_action_confidence]
  [timestamp_normalized]    [Xavier initialization]    [timing_prediction]
  [user_hash]               [bias terms]               [preference_score]
  [context]                 [learning_rate=0.01]       [sigmoid activation]

Learning Algorithm:
Forward Propagation: Input → Hidden (tanh) → Output (sigmoid); see the sketch after this list
Backpropagation: Gradient descent with configurable learning rate
Weight Initialization: Xavier/Glorot uniform distribution for stable training
Error Function: Mean Squared Error (MSE) for continuous value prediction
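A minimal Kotlin sketch of this forward/backward structure, assuming plain Float arrays for the weights; the class and member names below are illustrative, not the engine's actual fields:

import kotlin.math.exp
import kotlin.math.sqrt
import kotlin.math.tanh
import kotlin.random.Random

// Illustrative 4-8-3 MLP; names are assumptions, not the engine's real API.
class MlpSketch(seed: Long, private val learningRate: Float = 0.01f) {
    private val rng = Random(seed)

    // Xavier/Glorot uniform initialization: limit = sqrt(6 / (fanIn + fanOut))
    private fun xavier(fanIn: Int, fanOut: Int): Array<FloatArray> {
        val limit = sqrt(6.0f / (fanIn + fanOut))
        return Array(fanIn) { FloatArray(fanOut) { rng.nextFloat() * 2f * limit - limit } }
    }

    private val w1 = xavier(4, 8); private val b1 = FloatArray(8)
    private val w2 = xavier(8, 3); private val b2 = FloatArray(3)

    private fun sigmoid(x: Float) = 1f / (1f + exp(-x))

    // One gradient-descent step on MSE loss for a single (input, target) pair.
    fun trainStep(input: FloatArray, target: FloatArray) {
        // Forward pass: input -> hidden (tanh) -> output (sigmoid)
        val hidden = FloatArray(8) { j ->
            var s = b1[j]; for (i in 0 until 4) s += input[i] * w1[i][j]; tanh(s)
        }
        val output = FloatArray(3) { k ->
            var s = b2[k]; for (j in 0 until 8) s += hidden[j] * w2[j][k]; sigmoid(s)
        }
        // Backpropagation: output deltas via the sigmoid derivative o(1 - o)
        val d2 = FloatArray(3) { k ->
            2f * (output[k] - target[k]) / 3f * output[k] * (1f - output[k])
        }
        // Hidden deltas via the tanh derivative (1 - h^2)
        val d1 = FloatArray(8) { j ->
            var s = 0f; for (k in 0 until 3) s += d2[k] * w2[j][k]; s * (1f - hidden[j] * hidden[j])
        }
        // Gradient-descent updates for weights and biases
        for (j in 0 until 8) for (k in 0 until 3) w2[j][k] -= learningRate * hidden[j] * d2[k]
        for (k in 0 until 3) b2[k] -= learningRate * d2[k]
        for (i in 0 until 4) for (j in 0 until 8) w1[i][j] -= learningRate * input[i] * d1[j]
        for (j in 0 until 8) b1[j] -= learningRate * d1[j]
    }
}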
Feature Engineering:
Action Type Encoding: Hash-based normalization to the [-1, 1] range (sketched below)
Temporal Features: Time-of-day normalization for circadian pattern learning
User Identification: Hash-based user encoding for personalization
Context Awareness: Binary context presence indicator for situational learning
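A hedged sketch of how these four features might be derived from a single interaction; encodeFeatures and its parameters are hypothetical names, not the engine's API:

import java.time.LocalTime

// Illustrative feature encoding only; the engine's actual extraction may differ.
fun encodeFeatures(actionType: String, userId: String, time: LocalTime, hasContext: Boolean): FloatArray {
    // Hash-based normalization to [-1, 1]: map hashCode onto [0, 1], then shift and scale.
    fun hashToUnit(s: String): Float =
        (s.hashCode().toLong() - Int.MIN_VALUE) /
            (Int.MAX_VALUE.toLong() - Int.MIN_VALUE).toFloat() * 2f - 1f
    // Time of day normalized to [0, 1] to expose circadian structure.
    val timeOfDay = time.toSecondOfDay() / 86_400f
    return floatArrayOf(
        hashToUnit(actionType),     // action_type_hash
        timeOfDay,                  // timestamp_normalized
        hashToUnit(userId),         // user_hash
        if (hasContext) 1f else 0f  // binary context presence indicator
    )
}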
Concurrency & Thread Safety:
Atomic Operations: AtomicBoolean for initialization state management
Mutex Protection: Mutex guards all neural network weight modifications
Coroutine Integration: All operations execute within provided CoroutineScope
Non-blocking Training: Asynchronous training step execution, illustrated below
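The pattern above can be sketched with kotlinx.coroutines as follows; TrainingCoordinator and its members are illustrative stand-ins for the engine's internals:

import java.util.concurrent.atomic.AtomicBoolean
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.launch
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

class TrainingCoordinator(private val scope: CoroutineScope) {
    private val initialized = AtomicBoolean(false)  // atomic initialization-state flag
    private val weightsMutex = Mutex()              // guards all weight modifications

    fun initializeOnce(restore: () -> Unit) {
        // compareAndSet ensures initialization runs exactly once across threads
        if (initialized.compareAndSet(false, true)) restore()
    }

    // Non-blocking training: the caller receives a Job immediately
    fun trainStep(update: suspend () -> Unit): Job = scope.launch {
        check(initialized.get()) { "engine not initialized" }
        weightsMutex.withLock { update() }          // serialize weight mutation
    }
}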
State Persistence:
Serialization: Custom binary format for neural network weights and biases; one plausible layout is sketched after this list
State Recovery: Automatic restoration from KarlContainerState during initialization
Version Management: State versioning for backward compatibility
Memory Management: Bounded training history to prevent memory bloat
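The engine's actual wire format is internal; as a sketch, one plausible versioned binary layout using java.io.DataOutputStream might look like this (serializeWeights is a hypothetical helper):

import java.io.ByteArrayOutputStream
import java.io.DataOutputStream

// Illustrative serialization only; the real format may differ.
fun serializeWeights(version: Int, w1: Array<FloatArray>, b1: FloatArray,
                     w2: Array<FloatArray>, b2: FloatArray): ByteArray {
    val bytes = ByteArrayOutputStream()
    DataOutputStream(bytes).use { out ->
        out.writeInt(version)              // version tag for backward compatibility
        for (matrix in listOf(w1, w2)) {
            out.writeInt(matrix.size)      // row count
            out.writeInt(matrix[0].size)   // column count
            for (row in matrix) for (v in row) out.writeFloat(v)
        }
        for (bias in listOf(b1, b2)) {
            out.writeInt(bias.size)
            for (v in bias) out.writeFloat(v)
        }
    }
    return bytes.toByteArray()
}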
Performance Characteristics:
Training Complexity: O(n×m + m×k) per training step, where n=input, m=hidden, k=output neurons
Prediction Latency: effectively constant-time forward pass with pre-trained weights, since the 4-8-3 architecture is fixed
Memory Usage: O(n×m + m×k) for weight matrices plus bounded history
Convergence: Adaptive learning rate with momentum for stable training
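For the 4-8-3 topology above, the two weight matrices hold 4×8 + 8×3 = 56 values and the bias vectors another 8 + 3 = 11, for 67 trainable parameters in total, so each training step updates a small, fixed set of values.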
Analytics & Monitoring:
Training Metrics: Loss tracking, training step counting, convergence monitoring
Prediction Quality: Confidence scoring, alternative suggestion ranking
User Insights: Interaction counting, preference learning, behavioral analysis
Visualization: Confidence history sparklines for trend analysis (example below)
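As an illustration of the sparkline idea, a small helper (hypothetical, not part of the engine's API) can render a confidence history with Unicode block characters:

// Renders values in [0, 1] as a one-line Unicode sparkline.
fun sparkline(confidences: List<Float>): String {
    val blocks = "▁▂▃▄▅▆▇█"
    return confidences.joinToString("") { c ->
        val idx = (c.coerceIn(0f, 1f) * (blocks.length - 1)).toInt()
        blocks[idx].toString()
    }
}

// Example: sparkline(listOf(0.2f, 0.5f, 0.8f, 0.6f)) returns "▂▄▆▅"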
Example Usage:
val engine = RealLearningEngine(learningRate = 0.01f, randomSeed = 42L)
engine.initialize(savedState, coroutineScope)
// Training from user interactions
val trainingJob = engine.trainStep(interactionData)
trainingJob.join()
// Generate predictions
val prediction = engine.predict(contextData, instructions)
println("Suggested action: ${prediction?.content} (${prediction?.confidence})")
// Monitor learning progress
val insights = engine.getLearningInsights()
println("Progress: ${insights.progressEstimate * 100}%")Since
1.0.0
Author
KARL AI Development Team
Parameters
learningRate: Neural network learning rate controlling the gradient descent step size. Typical values: 0.001-0.1. Higher values enable faster learning but may cause instability; lower values give stable but slow convergence.
randomSeed: Random seed for reproducible weight initialization and stochastic operations. Use a fixed seed for testing/debugging and a random seed for production diversity.
See also
LearningEngine: The interface contract this implementation fulfills
The training data structure processed by this engine
The prediction output structure generated by this engine
The analytics data structure provided by this engine
Constructors
Creates a new RealLearningEngine with specified hyperparameters. The neural network is not initialized until initialize is called.
Types
Training example data structure for neural network learning.
Functions
Serializes complete neural network state for persistence and recovery.
Provides comprehensive analytics and performance insights about the learning process.
Returns a human-readable string describing the neural network architecture.
Initializes the neural network engine with optional state restoration.
Generates predictions for user behavior and action recommendations.
Executes asynchronous neural network training step from user interaction data.