Understanding Linear Regression: From Theory to Implementation

Part 1 of the Deep Learning Series — November 8, 2025

Introduction

Linear regression is one of the fundamental algorithms in machine learning. It serves as a building block for understanding more complex neural networks and deep learning concepts.

Mathematical Formulation

Given input features x and target values y, we want to find parameters w (weights) and b (bias) such that:

ŷ = w·x + b    (or simply ŷ = wx + b in the single-feature case)

The loss function (Mean Squared Error) is:

L(w, b) = (1/n) ∑ᵢ (yᵢ − ŷᵢ)²
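For a concrete feel, the loss can be computed directly with NumPy (the numbers here are made up for illustration):

```python
import numpy as np

y = np.array([3.0, 5.0, 7.0])      # targets
y_hat = np.array([2.5, 5.0, 8.0])  # predictions
mse = np.mean((y - y_hat) ** 2)    # (0.25 + 0.0 + 1.0) / 3 ≈ 0.4167
```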

Gradient Descent

To find the optimal parameters, we use gradient descent. The update rules are:

w = w - α * ∂L/∂w

b = b - α * ∂L/∂b

where α is the learning rate. Differentiating the loss gives ∂L/∂w = (2/n) ∑ᵢ (ŷᵢ − yᵢ)xᵢ and ∂L/∂b = (2/n) ∑ᵢ (ŷᵢ − yᵢ).
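The analytic gradients of the MSE loss can be sanity-checked against finite differences; this sketch uses a tiny made-up dataset:

```python
import numpy as np

# Tiny made-up dataset (for illustration only)
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])
w, b = 0.5, 0.0

def loss(w, b):
    y_hat = w * x + b
    return np.mean((y - y_hat) ** 2)

# Analytic gradients: (2/n) ∑ (ŷ − y) x  and  (2/n) ∑ (ŷ − y)
y_hat = w * x + b
dw = (2 / len(x)) * np.sum((y_hat - y) * x)
db = (2 / len(x)) * np.sum(y_hat - y)

# Central finite-difference approximations
eps = 1e-6
dw_num = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)
db_num = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
```

If the formulas are right, dw ≈ dw_num and db ≈ db_num up to numerical error.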

Python Implementation

import numpy as np

class LinearRegression:
    def __init__(self, learning_rate=0.01):
        self.lr = learning_rate
        self.w = None
        self.b = None

    def fit(self, X, y, epochs=1000):
        # Initialize parameters
        self.w = np.random.randn(X.shape[1])
        self.b = 0.0

        for _ in range(epochs):
            # Forward pass
            y_pred = np.dot(X, self.w) + self.b

            # Compute gradients of the MSE loss
            # (the factor of 2 from the derivative is absorbed into
            # the learning rate, a common convention)
            dw = (1 / X.shape[0]) * np.dot(X.T, (y_pred - y))
            db = (1 / X.shape[0]) * np.sum(y_pred - y)

            # Update parameters
            self.w -= self.lr * dw
            self.b -= self.lr * db

    def predict(self, X):
        return np.dot(X, self.w) + self.b
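The same training loop can be exercised end-to-end on synthetic data; the loop is inlined here so the snippet runs standalone, and the dataset and hyperparameters are made up for illustration:

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus small noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + 0.01 * rng.normal(size=200)

# Gradient descent on the MSE loss (same updates as the class above)
lr, w, b = 0.1, np.zeros(1), 0.0
for _ in range(500):
    y_pred = X @ w + b
    w -= lr * (X.T @ (y_pred - y)) / len(y)
    b -= lr * np.sum(y_pred - y) / len(y)

# The learned parameters should recover w ≈ 3 and b ≈ 2
```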

Key Concepts

- The linear model ŷ = wx + b is fit by minimizing the mean squared error over the training data.
- Gradient descent iteratively updates w and b in the direction that reduces the loss.
- The learning rate α controls the step size: too large and training can diverge, too small and convergence is slow.

Next post coming soon: Neural Networks Basics