Writing Efficient Algorithms: Time Complexity and Space Optimization

Pawan Natekar

6 min read · Oct 15, 2024


Much of writing good algorithms comes down to optimizing the performance of your applications, especially when your algorithms process large datasets or run in systems with real-time constraints. The concepts of **time complexity** and **space complexity** measure how efficient your algorithms are, and knowing how to analyze and optimize both is what it takes to substantially improve your code's overall efficiency.

In this blog post, we'll cover what time and space complexity are, how to analyze them, and techniques for optimizing algorithms. We'll also look at common examples of time complexity and ways to reduce both time and space usage in your programs.

---
Understanding Time Complexity

Time complexity describes how the running time of an algorithm grows as the size of its input increases; it expresses an algorithm's performance as a function of its input size.

Time complexity is usually expressed in Big O notation, which provides an upper bound on the growth rate of the execution time. The most common classes are:

- O(1): Constant time; the execution time does not depend on the size of the input.
- O(log n): Logarithmic time; the execution time grows in proportion to the logarithm of the input size.
- O(n): Linear time; the execution time grows in direct proportion to the input size.
- O(n log n): Log-linear time; the execution time grows in proportion to n times log n.
- O(n²): Quadratic time; the execution time grows with the square of the input size.
- O(2ⁿ): Exponential time; the execution time roughly doubles with each additional input element.
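
To build intuition for how quickly these rates diverge, here is a small illustrative sketch (not from the original post) that prints approximate operation counts for a few input sizes:

```python
import math

# Rough operation counts for each growth rate (illustrative only)
for n in (10, 100, 1000):
    print(f"n={n:>4}  log n={math.log2(n):6.1f}  "
          f"n log n={n * math.log2(n):10.0f}  n^2={n**2:>9}")
```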

Example 1: Constant Time (O(1))

An algorithm with constant time complexity always takes the same amount of time, regardless of input size. Example: accessing an element of an array by its index.

```python
def get_first_element(arr):
    return arr[0]  # O(1) operation
```
In this example, accessing the first element takes no additional time as the array grows larger.

Example 2: Linear Time (O(n))
Linear time complexity means the running time is directly proportional to the input size. For example, a loop that visits every element of an array of size `n`:
```python
def print_elements(arr):
    for element in arr:
        print(element)  # O(n) operation
```
If the array has 10 elements, the loop runs 10 times; if it has 100 elements, it runs 100 times.

---
Typical Time Complexities

Next, let's look at the typical time complexities of commonly used algorithms.

| Time Complexity | Example Algorithm | Description |
| --- | --- | --- |
| O(1) | Accessing an array element | Constant-time operation. |
| O(log n) | Binary search | Halves the problem space at each step. |
| O(n) | Linear search (scanning a list) | Scales linearly with the input size. |
| O(n log n) | Merge sort, quicksort (average case) | Log-linear time; typical of efficient sorting. |
| O(n²) | Bubble sort, selection sort | Quadratic time; inefficient for large inputs. |
| O(2ⁿ) | Naive recursive Fibonacci | Exponential time; grows very fast with n. |

Remember that when analyzing time complexity, we only track how the number of operations grows with the input size; constant factors and lower-order terms are ignored.
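
For instance, a function that makes two sequential passes over its input performs about 2n operations, but we still classify it as O(n) because the constant factor 2 is dropped. A small illustrative sketch:

```python
def min_and_max(arr):
    smallest = min(arr)       # first pass: ~n operations
    largest = max(arr)        # second pass: ~n more operations
    return smallest, largest  # ~2n total, still classified as O(n)
```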

---
What is Space Complexity?

Space complexity is the amount of memory an algorithm needs as a function of its input size. Like time complexity, it is expressed in Big O notation. Analyzing it is especially valuable in memory-constrained environments or when working with massive datasets.

For example, a recursive algorithm can use more space because it has to keep its chain of calls on the stack, whereas an iterative algorithm doing exactly the same work can be more space-efficient.

Iterative vs Recursive Fibonacci

In this example, the recursive Fibonacci has higher space complexity because of the recursion stack:

```python
def fibonacci_recursive(n):
    if n <= 1:
        return n
    else:
        return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)
```
The iterative version requires less space because it does not use recursion and therefore keeps no state on the stack. Here is what it looks like:
```python
def fibonacci_iterative(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

---
Time Complexity Optimization

Let's discuss some common techniques for reducing the time complexity of your algorithms.

1. Optimize Your Data Structures

Often, choosing the right data structure reduces time complexity dramatically. For example, use a dictionary or hash table instead of a list: insertions and lookups average O(1), whereas a list's are O(n).

```python
# Inefficient: O(n) for search
def find_element(arr, target):
    for element in arr:
        if element == target:
            return True
    return False

# Efficient: O(1) on average for search
def find_element_optimized(hash_map, target):
    return target in hash_map
```
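
As a quick usage sketch (the variable names here are illustrative), a set works for the optimized version because Python's `in` operator on a set is also O(1) on average:

```python
data = list(range(100_000))
lookup = set(data)  # built once in O(n)

find_element(data, 99_999)              # O(n): scans the whole list
find_element_optimized(lookup, 99_999)  # O(1) on average: hash lookup
```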

2. Divide and Conquer

In the divide and conquer approach, a problem is split into smaller subproblems, each is solved recursively, and the sub-solutions are combined into the final answer. It underlies efficient algorithms like merge sort and binary search; a merge sort sketch follows below.
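
To make the pattern concrete, here is a minimal merge sort sketch (one of several ways to write it): split the array in half, sort each half recursively, then merge the sorted halves, for O(n log n) time overall:

```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr  # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # recursively sort each half
    right = merge_sort(arr[mid:])
    merged = []                    # merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```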

Binary search is the classic example of an algorithm with logarithmic time complexity, O(log n). Instead of checking the whole array, it eliminates roughly half of the remaining elements with each comparison.

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```
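
In practice, note that Python's standard library already implements binary search in the `bisect` module:

```python
import bisect

arr = [1, 3, 5, 7, 9]
i = bisect.bisect_left(arr, 7)        # index where 7 would be inserted
found = i < len(arr) and arr[i] == 7  # True: 7 is at index 3
```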

3. Dynamic Programming

Dynamic programming (DP) is an optimization technique for recursive algorithms: it stores the results of overlapping subproblems so that redundant computation is avoided.

For example, the recursive Fibonacci runs in O(2ⁿ) time, but a bottom-up DP version reduces it to O(n):

```python
def fibonacci_dp(n):
    dp = [0, 1] + [0] * max(0, n - 1)  # dp[i] holds the i-th Fibonacci number
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```
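
An equivalent top-down variant memoizes the recursive version instead; here is a sketch using Python's `functools.lru_cache` to handle the caching:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_memo(n):
    if n <= 1:
        return n
    # Each value of n is computed only once, so this is O(n) time
    return fibonacci_memo(n - 1) + fibonacci_memo(n - 2)
```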
---
Space Complexity Optimization

Now let's look at techniques for optimizing space complexity as well:
1. In-place Algorithms

An in-place algorithm modifies the input data directly rather than allocating an auxiliary data structure, reducing the space complexity from O(n) to O(1).

Example: In-place List Reversal

```python
def reverse_in_place(arr):
    left, right = 0, len(arr) - 1
    while left < right:
        arr[left], arr[right] = arr[right], arr[left]
        left += 1
        right -= 1
```
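
Worth noting: Python lists already provide this in-place behavior via `list.reverse()`, while slicing with `arr[::-1]` allocates a new list and costs O(n) extra space:

```python
arr = [1, 2, 3, 4]
arr.reverse()      # in place: O(1) extra space
copy = arr[::-1]   # new list: O(n) extra space
```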

2. Tail Recursion

A function is tail-recursive when the recursive call is its last action. Some compilers and interpreters optimize this pattern so that no extra stack frames are needed, bringing the space complexity down to O(1).

```python
def factorial_tail_recursive(n, accumulator=1):
    if n == 0:
        return accumulator
    return factorial_tail_recursive(n - 1, n * accumulator)
```
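
CPython, however, does not perform tail-call optimization, so in Python the reliable way to get O(1) space is to rewrite the tail recursion as a loop:

```python
def factorial_iterative(n):
    accumulator = 1
    while n > 0:
        accumulator *= n  # same accumulator pattern, but no stack frames
        n -= 1
    return accumulator
```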
3. Eliminate Unnecessary Data Structures

Use auxiliary data structures such as lists, dictionaries, or queues only when the problem actually requires them, and avoid copying data unnecessarily, since copies add both time and space overhead.
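
For example, a generator expression processes items one at a time instead of materializing an intermediate list, keeping the extra space at O(1):

```python
# O(n) extra space: builds the full list of squares first
total = sum([x * x for x in range(1_000_000)])

# O(1) extra space: the generator yields one square at a time
total = sum(x * x for x in range(1_000_000))
```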

Evaluating and Optimizing Algorithms in Real Life

1. Find Bottlenecks: Profile your algorithm to identify where it spends the most time and uses the most memory (see the profiling sketch after this list).
2. Improve Inefficient Code: Replace poor algorithms with better ones; for example, swap nested loops for a hash-map lookup.
3. Test with Large Inputs: Validate your optimizations against realistically large input sizes.
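
As a minimal sketch of step 1 using the standard-library profiler (`slow_function` here is a hypothetical stand-in for your real workload):

```python
import cProfile

def slow_function(n):
    return sum(i * i for i in range(n))  # stand-in workload

# Prints per-function call counts and cumulative times
cProfile.run("slow_function(1_000_000)")
```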

Algorithm optimization is a balance between improving running time and reducing space usage. You should always aim to write clean, readable, maintainable code; but when performance becomes critical, understanding time and space complexity helps you make smarter choices.

In brief:

- Time complexity measures how an algorithm's running time grows with input size.
- Space complexity measures how much memory the algorithm uses.
- You can optimize time complexity with smart data structures, divide and conquer strategies, and dynamic programming.
- You can optimize space complexity with in-place algorithms, tail recursion, and minimal use of auxiliary data structures.

Mastering these concepts will lead you to write more efficient and scalable algorithms that improve the overall performance of your applications.
