Technology Fundamentals

Big O Notation

Definition

Big O notation describes the limiting behavior of a function as its argument tends toward a particular value or infinity. In computer science, it is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
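
As a rough illustration, the Python sketch below places three small functions into common complexity classes. The function names and list-based inputs are illustrative choices, not part of any particular library.

    # Illustrative sketches of functions in common complexity classes.

    def first_element(items):
        """O(1): constant time -- the work does not depend on len(items)."""
        return items[0] if items else None

    def total(items):
        """O(n): linear time -- one pass over the input."""
        result = 0
        for value in items:          # runs len(items) times
            result += value
        return result

    def has_duplicate(items):
        """O(n^2): quadratic time -- every pair of elements is compared."""
        for i in range(len(items)):
            for j in range(i + 1, len(items)):   # nested loop over the input
                if items[i] == items[j]:
                    return True
        return False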

Why It Matters

Big O notation allows developers to analyze and compare the efficiency of algorithms. It helps in choosing the most efficient algorithm for a given problem, especially when dealing with large datasets.
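
For example, when looking up a value in a large sorted list, the classification is what tells a developer that an O(log n) binary search will scale far better than an O(n) linear scan. A minimal Python sketch follows; the function names are hypothetical, and only the standard-library bisect module is assumed.

    from bisect import bisect_left

    def linear_search(sorted_items, target):
        """O(n): checks elements one by one."""
        for index, value in enumerate(sorted_items):
            if value == target:
                return index
        return -1

    def binary_search(sorted_items, target):
        """O(log n): halves the search range each step via bisect_left."""
        index = bisect_left(sorted_items, target)
        if index < len(sorted_items) and sorted_items[index] == target:
            return index
        return -1

    # For a list of a million elements, the linear scan may inspect up to
    # 1,000,000 items, while the binary search needs at most about 20 steps.
    data = list(range(1_000_000))
    assert linear_search(data, 765_432) == binary_search(data, 765_432) == 765_432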

Contextual Example

An algorithm with O(n) complexity (linear time) will take roughly twice as long to run if the input size is doubled, while an algorithm with O(n²) complexity (quadratic time) will take roughly four times as long. For large inputs, the O(n) algorithm is therefore far more efficient.
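
The doubling behavior can be checked by counting basic operations rather than measuring wall-clock time. A minimal Python sketch, with illustrative function names:

    def linear_ops(n):
        """Counts the loop iterations of an O(n) pass."""
        count = 0
        for _ in range(n):
            count += 1
        return count

    def quadratic_ops(n):
        """Counts the iterations of an O(n^2) nested loop."""
        count = 0
        for _ in range(n):
            for _ in range(n):
                count += 1
        return count

    for n in (1_000, 2_000):
        print(n, linear_ops(n), quadratic_ops(n))
    # Doubling n from 1,000 to 2,000 doubles the linear count
    # (1,000 -> 2,000) and quadruples the quadratic count
    # (1,000,000 -> 4,000,000).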

Common Misunderstandings

  • Big O is often equated with the worst case, but strictly it gives an upper bound on growth; best-, average-, and worst-case running times can each be described with Big O.
  • It does not measure exact speed in seconds; it describes how time or space requirements grow with input size, so constant factors and lower-order terms are ignored (see the sketch after this list).
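
Both points can be seen in a plain linear search: its best case touches one element and its worst case touches all n, and adding constant per-element work changes its absolute speed but not its O(n) growth rate. A minimal Python sketch with hypothetical names:

    def linear_search(items, target):
        """Worst case O(n): target is last or absent; best case O(1): target is first."""
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1

    data = list(range(10_000))
    linear_search(data, 0)    # best case: one comparison
    linear_search(data, -1)   # worst case: all 10,000 elements are checked

    # Two O(n) functions can differ in absolute speed: this variant does extra
    # constant work per element, but its growth rate is unchanged.
    def slower_linear_search(items, target):
        for index, value in enumerate(items):
            value_copy = value * 1   # extra constant-time work, ignored by Big O
            if value_copy == target:
                return index
        return -1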

Related Terms

Last Updated: December 17, 2025