Analyzing the running time helps you understand the efficiency of an algorithm and make informed decisions about its suitability for different problem sizes. It’s an essential skill for algorithm design and optimization. Analyzing an iterative algorithm means evaluating its time complexity, which describes how the execution time scales with the input size.
Here are steps to analyze the running time of an iterative algorithm:
- Identify the Basic Operation: Determine the fundamental operation(s) that are executed repeatedly in the algorithm. This operation is typically what consumes the most time in the algorithm.
- Count the Operations: Count how many times the basic operation(s) are executed as a function of the input size. This step involves examining loops, conditionals, and recursive calls.
- Express the Running Time: Express the number of basic operations as a mathematical function of the input size (n). This function is often represented using Big O notation, which provides an upper bound on the growth rate of the running time.
- Analyze the Worst Case: In many cases, it’s important to analyze the worst-case scenario, where the input data is arranged in a way that maximizes the number of operations. This provides an upper bound on the running time for all inputs.
- Simplify the Expression: Simplify the mathematical expression representing the running time to its most dominant term. This is typically the term that grows the fastest as n increases.
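As a sketch of how these steps apply in practice, consider a simple nested loop (this example is illustrative and not from the original text). The basic operation is the equality comparison; counting it gives n × n executions, which simplifies to O(n^2):

```python
def count_equal_pairs(items):
    """Count ordered pairs (i, j) whose items are equal.

    Basic operation: the equality comparison in the inner loop.
    The inner loop runs n times for each of the n outer
    iterations, so the comparison executes n * n times -- O(n^2).
    """
    n = len(items)
    count = 0
    for i in range(n):
        for j in range(n):
            if items[i] == items[j]:  # basic operation, runs n*n times
                count += 1
    return count

print(count_equal_pairs([1, 2, 1]))  # 5
```

Here the dominant term of the operation count, n^2, is what survives the simplification step.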
Here are some common time complexities:
- O(1): Constant time. The running time is constant and does not depend on the input size. Example: Accessing an element in an array by index.
- O(log n): Logarithmic time. The running time grows slowly as the input size increases. Example: Binary search in a sorted list.
- O(n): Linear time. The running time scales linearly with the input size. Example: Linear search in an unsorted list.
- O(n log n): Linearithmic time. Common in efficient sorting algorithms like merge sort and quicksort.
- O(n^2): Quadratic time. The running time grows quadratically with the input size. Example: Selection sort.
- O(n^k): Polynomial time. Higher values of k indicate higher polynomial time complexity.
- O(2^n): Exponential time. The running time grows rapidly with the input size. Example: Brute force solutions to certain problems.
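To make one of these concrete, here is a minimal sketch of binary search on a sorted list, the classic O(log n) example from the table above. Each comparison halves the remaining search range, so the loop body runs at most about log2(n) times:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the range [lo, hi], so the loop runs
    at most about log2(n) times -- a running time of O(log n).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Contrast this with linear search, which in the worst case examines all n elements and is therefore O(n).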
Example – Calculate the sum of the first n positive integers.
Here’s the Python code for this program:

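The original code listing appears to be missing here; based on the analysis that follows (a loop from 1 to n accumulating `result += i`), a minimal reconstruction is:

```python
def sum_first_n(n):
    """Return the sum of the first n positive integers."""
    result = 0
    for i in range(1, n + 1):  # loop runs n times
        result += i            # basic operation
    return result

print(sum_first_n(5))  # 15
```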
Steps to analyze the running time of this iterative program:
- Identify the Basic Operation: In this program, the basic operation is the addition operation (`result += i`) inside the loop. This operation is executed repeatedly for each value of `i`.
- Count the Operations: The loop runs from `1` to `n` (inclusive), so the addition operation is executed `n` times.
- Express the Running Time: The running time is expressed as a function of `n`, where `n` is the input size. In this case, the number of operations is directly proportional to `n`.
- Analyze the Worst Case: In this program, there is no worst-case scenario, as the number of operations is the same for all inputs. Therefore, the best-case, worst-case, and average-case time complexities are all the same.
- Simplify the Expression: The expression representing the running time is O(n), because the number of operations is directly proportional to the input size.
In summary, the running time of the given iterative program that calculates the sum of the first n positive integers is analyzed to be O(n). This means that the running time grows linearly with the size of the input n, making it an efficient algorithm for this particular task.
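As a side note (not part of the original analysis), the same task also admits a constant-time solution via the closed-form formula n(n + 1)/2, which illustrates how two algorithms for the same problem can have different time complexities:

```python
def sum_first_n_constant(n):
    """Sum of the first n positive integers via the closed-form
    formula n * (n + 1) / 2.

    A single arithmetic expression, so the running time is O(1):
    it does not grow with n.
    """
    return n * (n + 1) // 2

print(sum_first_n_constant(100))  # 5050
```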