A Sample Blog Post: Writing Math and Code

February 23, 2026

This is a sample blog post to demonstrate the formatting available for research notes. You can write regular paragraphs with links, bold text, and italic text.

Inline and Display Math

You can write inline math like $f(x) = \sum_{i=1}^{n} w_i x_i + b$ within a sentence. For display equations, use double dollar signs:

$$\min_{\theta} \mathcal{L}(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell(f_\theta(x_i), y_i) + \lambda \|\theta\|_2^2$$
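As a concrete sketch of this objective, here is the regularized empirical risk computed in NumPy, assuming a linear model $f_\theta(x) = \theta^\top x$ and squared loss for $\ell$ (both assumptions, chosen for illustration):

```python
import numpy as np

def regularized_loss(theta, X, y, lam):
    """Mean squared loss plus lambda * ||theta||_2^2, for a linear model."""
    preds = X @ theta
    data_term = np.mean((preds - y) ** 2)
    return data_term + lam * np.sum(theta ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
theta_true = np.ones(5)
y = X @ theta_true  # labels generated exactly by theta_true

# Data term is zero at theta_true, so only the penalty remains:
print(regularized_loss(theta_true, X, y, lam=0.1))  # 0.5  (0.1 * ||theta||^2)
```

At the generating parameters the data term vanishes, so the printed value is just the $\ell_2$ penalty, which makes the two terms of the objective easy to see in isolation.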

Multi-line equations also work:

$$\begin{aligned} \nabla_\theta \mathcal{L}(\theta) &= \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta \ell(f_\theta(x_i), y_i) + 2\lambda \theta \\ \theta_{t+1} &= \theta_t - \eta \nabla_\theta \mathcal{L}(\theta_t) \end{aligned}$$

Code Blocks

Use <pre><code> tags for code blocks:

import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    """A transformer encoder whose stack is applied num_loops times,
    sharing the same weights across loops."""

    def __init__(self, d_model, nhead, num_layers, num_loops):
        super().__init__()
        self.num_loops = num_loops
        layer = nn.TransformerEncoderLayer(d_model, nhead)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, x):
        # Feed the encoder's output back in as its input, num_loops times.
        for _ in range(self.num_loops):
            x = self.encoder(x)
        return x
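
As a quick sanity check, the module can be instantiated and run on a dummy batch; the sizes below are illustrative, and the class definition is repeated so the snippet runs standalone:

```python
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    # Same definition as above, repeated for a self-contained run.
    def __init__(self, d_model, nhead, num_layers, num_loops):
        super().__init__()
        self.num_loops = num_loops
        layer = nn.TransformerEncoderLayer(d_model, nhead)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, x):
        for _ in range(self.num_loops):
            x = self.encoder(x)
        return x

model = LoopedTransformer(d_model=64, nhead=4, num_layers=2, num_loops=3)
x = torch.randn(10, 8, 64)  # (seq_len, batch, d_model): PyTorch's default layout
out = model(x)
print(out.shape)  # torch.Size([10, 8, 64]) -- looping preserves the shape
```

Because each loop maps a `(seq_len, batch, d_model)` tensor to one of the same shape, the number of loops can be varied without touching the rest of the model.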

You can also use inline code like model.train() within text.

Lists and Structure

Unordered list:

  - Learning rate $\eta$
  - Regularization strength $\lambda$
  - Number of loop iterations

Ordered list:

  1. Initialize parameters $\theta_0$
  2. Compute gradient $\nabla \mathcal{L}(\theta_t)$
  3. Update $\theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t)$
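
The three steps above can be sketched in a few lines of NumPy, again assuming a linear model with squared loss so that the gradient matches the earlier display equation (both assumptions for illustration):

```python
import numpy as np

def grad(theta, X, y, lam):
    # Gradient of (1/n) * sum of squared losses plus lam * ||theta||^2:
    # (2/n) X^T (X theta - y) + 2 * lam * theta
    n = len(y)
    return (2 / n) * X.T @ (X @ theta - y) + 2 * lam * theta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5])

theta = np.zeros(3)               # 1. initialize theta_0
eta, lam = 0.1, 0.01
for _ in range(500):
    g = grad(theta, X, y, lam)    # 2. compute the gradient at theta_t
    theta = theta - eta * g       # 3. update theta_{t+1} = theta_t - eta * g

print(theta)  # close to [1.0, -2.0, 0.5], shrunk slightly by the penalty
```

With the small penalty, the iterates converge to a slightly shrunk version of the generating parameters rather than recovering them exactly.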

Images

You can include figures by placing images in the images/ folder.

References

Link to papers, e.g., see our work on Looped Transformers.