Student Pass/Fail AI (Neural Network from Scratch in NumPy)
This project is a complete, functional Artificial Intelligence built from the ground up using only NumPy. It demonstrates the core mathematics and logic of deep learning without relying on any high-level libraries like TensorFlow or Keras.
The AI is trained to predict whether a student will "Pass" or "Fail" based on two features: their Score and their Study Hours.
Network Architecture (2-4-2)
This network uses a simple Multi-Layer Perceptron (MLP) architecture with one hidden layer. The flow of information is as follows:
[Input Layer] (2 Neurons)
( ) [Score]
( ) [Study Hours]
|
| (Weights: 2×4) - Connects 2 input neurons to 4 hidden neurons
v
[Hidden Layer] (4 Neurons) - Uses Sigmoid Activation
( )
( )
( )
( )
|
| (Weights: 4×2) - Connects 4 hidden neurons to 2 output neurons
v
[Output Layer] (2 Neurons) - Uses Softmax Activation
( ) [FAIL]
( ) [PASS]
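To make the dimensions concrete, here is a minimal shape check. The batch of three students and the zero-filled weight matrices are placeholders purely to show how an (N, 2) input flows through the 2-4-2 architecture:

```python
import numpy as np

# Hypothetical batch of 3 students: columns are [score, study_hours]
X = np.array([[90.0, 20.5],
              [45.0,  3.0],
              [70.0, 10.0]])        # shape (3, 2)

W_hidden = np.zeros((2, 4))         # input -> hidden (placeholder values)
W_output = np.zeros((4, 2))         # hidden -> output (placeholder values)

hidden = X @ W_hidden               # (3, 2) @ (2, 4) -> (3, 4)
output = hidden @ W_output          # (3, 4) @ (4, 2) -> (3, 2)
print(hidden.shape, output.shape)   # (3, 4) (3, 2)
```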
Key Components Explained
🔢 Weights & Biases (initialization sketched after this list):
- Hidden Weights (2×4): Random values scaled by 0.01 so training starts from small weights. Shape (2,4) connects the 2 input neurons to the 4 hidden neurons.
- Hidden Biases (4,): One bias per hidden neuron, initialized to zero.
- Output Weights (4×2): Connects the 4 hidden neurons to the 2 output neurons.
- Output Biases (2,): One bias per output neuron, initialized to 0.01.
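A minimal initialization sketch matching the shapes above. The seed and the 0.01 scale on the output weights are assumptions; the README only states the scale for the hidden weights:

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily for reproducibility

hidden_weights = rng.standard_normal((2, 4)) * 0.01  # small random start
hidden_biases  = np.zeros(4)                         # one bias per hidden neuron
output_weights = rng.standard_normal((4, 2)) * 0.01  # scale assumed, see above
output_biases  = np.full(2, 0.01)                    # initialized to 0.01
```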
🎯 Activation Functions:
- Sigmoid: Squashes values to 0-1 range. Formula: 1 / (1 + e^(-x))
- Softmax: Converts output to probabilities that sum to 1. Perfect for classification!
- Sigmoid Derivative: Used in backpropagation: x * (1 - x), where x is the sigmoid output rather than the raw input (all three functions are sketched below)
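A sketch of all three functions. Note that sigmoid_derivative expects a sigmoid output, which is why the formula above is written as x * (1 - x):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real value into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(a):
    # `a` must already be a sigmoid OUTPUT, not a raw input
    return a * (1.0 - a)

def softmax(x):
    # Subtracting the row-wise max keeps exp() numerically stable;
    # each row of the result sums to 1
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)
```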
Training Process (1000 Epochs)
Each epoch follows this cycle:
1️⃣ Forward Pass:
- Input → Hidden: hidden_inputs = (data × weights) + bias
- Apply Sigmoid: hidden_outputs = sigmoid(hidden_inputs) → Values like [0.8, 0.6, 0.9, 0.7]
- Hidden → Output: output_inputs = (hidden_outputs × weights) + bias
- Apply Softmax: output_probs = softmax(output_inputs) → [0.2, 0.8], i.e. 20% FAIL and 80% PASS in the [FAIL, PASS] output order (the full pass is sketched below)
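Putting those four steps together, a forward pass over a whole batch might look like this sketch, reusing the activations and parameter names above (the function name forward is my own):

```python
def forward(X, hidden_weights, hidden_biases, output_weights, output_biases):
    # Input -> Hidden: (N, 2) @ (2, 4) + (4,) -> (N, 4)
    hidden_inputs = X @ hidden_weights + hidden_biases
    hidden_outputs = sigmoid(hidden_inputs)
    # Hidden -> Output: (N, 4) @ (4, 2) + (2,) -> (N, 2)
    output_inputs = hidden_outputs @ output_weights + output_biases
    output_probs = softmax(output_inputs)  # each row sums to 1
    return hidden_outputs, output_probs
```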
2️⃣ Calculate Error (Cross-Entropy Loss):
- Compare the prediction [0.2, 0.8] with the one-hot truth [0, 1]
- Error = prediction - truth = [0.2, -0.2]
- Loss = -mean(sum(labels × log(predictions))) (sketched in code below)
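The loss formula translates almost directly into NumPy; the small epsilon is my addition to guard against log(0):

```python
def cross_entropy_loss(labels, predictions, eps=1e-12):
    # labels: one-hot rows like [0, 1]; predictions: softmax rows like [0.2, 0.8]
    return -np.mean(np.sum(labels * np.log(predictions + eps), axis=1))
```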
3️⃣ Backward Pass (Backpropagation):
- Output Layer: Calculate error at output neurons
- Update Output Weights: weights -= (learning_rate × gradient) / num_samples
- Update Output Biases: biases -= learning_rate × average_error
- Hidden Layer: Propagate error back to hidden neurons
- Update Hidden Weights & Biases: Same update formula as the output layer (see the sketch after this list)
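A sketch of the backward pass under the update rules listed above. The key shortcut is that softmax combined with cross-entropy makes the output-layer error simply predictions - labels:

```python
def backward_and_update(X, labels, hidden_outputs, output_probs,
                        hidden_weights, hidden_biases,
                        output_weights, output_biases, learning_rate=0.1):
    n = X.shape[0]

    # Softmax + cross-entropy: output error = predictions - labels,
    # e.g. [0.2, 0.8] - [0, 1] = [0.2, -0.2]
    output_error = output_probs - labels                      # (N, 2)

    # Propagate the error back BEFORE updating the output weights
    hidden_error = (output_error @ output_weights.T) \
        * sigmoid_derivative(hidden_outputs)                  # (N, 4)

    # Gradient steps, averaged over the batch (arrays update in place)
    output_weights -= learning_rate * (hidden_outputs.T @ output_error) / n
    output_biases  -= learning_rate * output_error.mean(axis=0)
    hidden_weights -= learning_rate * (X.T @ hidden_error) / n
    hidden_biases  -= learning_rate * hidden_error.mean(axis=0)
```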
Hyperparameters
- Learning Rate: 0.1 - Controls how much weights change in each update (moderate learning speed)
- Epochs: 1000 - Number of times the entire training data loops through the network
- Training Data: 5 samples - Students with scores and study hours (a training-loop sketch using these hyperparameters follows)
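Tying everything together, a training loop with these hyperparameters could look like the sketch below. The five samples and their labels are invented placeholders, not the project's actual data:

```python
# One-hot labels in [FAIL, PASS] order; data values are made up
X_train = np.array([[90.0, 20.5], [35.0, 2.0], [75.0, 12.0],
                    [50.0, 5.0], [85.0, 18.0]])
y_train = np.array([[0, 1], [1, 0], [0, 1], [1, 0], [0, 1]])

for epoch in range(1000):
    hidden_outputs, output_probs = forward(
        X_train, hidden_weights, hidden_biases, output_weights, output_biases)
    backward_and_update(
        X_train, y_train, hidden_outputs, output_probs,
        hidden_weights, hidden_biases, output_weights, output_biases,
        learning_rate=0.1)
    if epoch % 200 == 0:  # loss tracking every 200 epochs
        loss = cross_entropy_loss(y_train, output_probs)
        print(f"epoch {epoch}: loss = {loss:.4f}")
```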
Example Prediction Flow
Input: Student with Score=90, Study Hours=20.5
Step 1: [90, 20.5] × weights + bias → hidden_inputs
Step 2: sigmoid(hidden_inputs) → [0.8, 0.6, 0.9, 0.7]
Step 3: [0.8, 0.6, 0.9, 0.7] × weights + bias → output_inputs
Step 4: softmax(output_inputs) → [0.2, 0.8]
Output: 80% PASS, 20% FAIL → Prediction: PASS ✅
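As a usage sketch, the same flow wrapped in a small helper (predict is a hypothetical name, not necessarily the project's):

```python
def predict(score, study_hours):
    # Run a single student through the trained network
    x = np.array([[score, study_hours]])
    _, probs = forward(x, hidden_weights, hidden_biases,
                       output_weights, output_biases)
    fail_prob, pass_prob = probs[0]  # output order is [FAIL, PASS]
    label = "PASS" if pass_prob >= fail_prob else "FAIL"
    return label, max(pass_prob, fail_prob)

label, confidence = predict(90, 20.5)
print(f"Prediction: {label} ({confidence:.0%} confidence)")
```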
Features
- ✅ Pure NumPy Implementation: No TensorFlow or Keras required
- ✅ Complete Training Loop: Forward pass, loss calculation, and backpropagation
- ✅ Live Predictions: Enter new student data and get instant predictions with confidence scores
- ✅ Loss Tracking: Monitor training progress every 200 epochs
- ✅ Matrix Operations: Efficient batch processing using NumPy arrays