What is Time complexity?
Time complexity is defined as the amount of time taken by an algorithm to run, as a function of the length of the input. It measures the time taken to execute each statement of code in an algorithm. It does not examine the total execution time of an algorithm; rather, it describes how the execution time varies (increases or decreases) as the number of operations in the algorithm changes. As the definition says, the amount of time taken is a function of the length of the input only.


Time Complexity Introduction
Space and Time define any physical object in the Universe. Similarly, Space and Time complexity can define the effectiveness of an algorithm. While we know there is more than one way to solve a problem in programming, knowing how an algorithm works efficiently can add value to the way we do programming. Knowing how to evaluate a program or algorithm using Space and Time complexity lets us make it behave under the required optimal conditions, and in doing so, makes us more efficient programmers.
While we reserve the space to understand Space complexity for the future, let us focus on Time complexity in this post. Time is Money! In this post, you will discover a gentle introduction to the Time complexity of an algorithm, and how to evaluate a program based on Time complexity.
Let’s get started.
Why is Time complexity Significant?
Let us first understand what defines an algorithm.
An algorithm, in computer programming, is a finite sequence of well-defined instructions, typically executed on a computer, to solve a class of problems or to perform a common task. By this definition, a sequence of defined instructions must be given to the computer to execute the algorithm or perform the specific task. Variation can occur in how those instructions are defined: any number of instruction sets can be written to perform the same task. Moreover, with the freedom to choose any of the available programming languages, the instructions can take any form of syntax, along with the performance boundaries of the chosen language. Finally, since the algorithm runs on a computer, the operating system, processor, hardware, and so on can also influence how it performs.
Now that we know different factors can influence the outcome of an algorithm being executed, it is wise to understand how efficiently such programs perform a task. To gauge this, we need to evaluate both the Space and Time complexity of an algorithm.
By definition, the Space complexity of an algorithm quantifies the amount of space or memory taken by an algorithm to run, as a function of the length of the input, while the Time complexity of an algorithm quantifies the amount of time taken to run, likewise as a function of the length of the input. Now that we know why Time complexity is so significant, it is time to understand what time complexity is and how to evaluate it.
To elaborate, Time complexity measures the time taken to execute each statement of code in an algorithm. If a statement is set to execute repeatedly, N times, then the total time is N multiplied by the time required to run that statement once.
The first algorithm is defined to print the statement only once; the time taken to execute it is shown as 0 nanoseconds. The second algorithm prints the same statement, but this time it is set to run inside a FOR loop 10 times. Here, the time taken to execute both lines of code, the FOR loop and the print statement, is 2 milliseconds. And the time taken increases as the value of N increases, since the statement is executed N times.
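The two algorithms described above are not reproduced here; a minimal Python sketch of them (assuming a simple print statement and a FOR loop, with timings taken via `time.perf_counter` rather than the exact values quoted above) might look like this:

```python
import time

def algorithm_one():
    print("Hello, World!")        # executed exactly once

def algorithm_two(n):
    for _ in range(n):            # loop runs n times
        print("Hello, World!")    # executed n times, so time grows with n

start = time.perf_counter()
algorithm_one()
t1 = time.perf_counter() - start

start = time.perf_counter()
algorithm_two(10)
t2 = time.perf_counter() - start

print(f"single print: {t1:.6f}s, loop of 10: {t2:.6f}s")
```

The absolute timings will differ on every machine; what matters is that the second algorithm's time scales with n while the first one's does not.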
Note: This code was run in a Python-Jupyter Notebook on Windows 64-bit OS with an Intel Core i7 ~ 2.4GHz processor. The time values above can vary with different hardware, a different OS, or a different programming language.
By now, you may have concluded that statements executed only once always require the same amount of time, while a statement inside a loop requires time that grows with the number of times the loop runs. And when an algorithm combines single-execution statements with loop statements or nested loop statements, the time increases proportionately, based on the number of times each statement is executed.
This leads us to the next question: how do we determine the relationship between the input and time for a given statement in an algorithm? To define this, we will see how each statement gets an order of notation describing its time complexity, called Big O notation.
What are the Different Types of Time Complexity Notation Used?
As we have seen, Time complexity is given by time as a function of the length of the input. There exists a relation between the input data size (n) and the number of operations performed (N) with respect to time. This relation is denoted as the order of growth in time complexity and is written O(n), where O is the order of growth and n is the length of the input.
Also known as 'Big O notation', it expresses the runtime of an algorithm in terms of how quickly it grows relative to the input n, by counting the number of operations performed on it. The time complexity of an algorithm is obtained by combining the O(n) terms assigned to each line of the function.
Different types of time complexities are used, such as Constant time (O(1)), Linear time (O(n)), Logarithmic time (O(log n)), Quadratic time (O(n^2)), Cubic time (O(n^3)), and more complex notations like Exponential time, Quasilinear time, factorial time, etc., based on the type of functions defined.
An algorithm has constant time complexity with order O(1) when it is not dependent on the input size ‘n’. The runtime remains the same regardless of the input size.
Linear time complexity, denoted by O(n), occurs when the running time increases linearly with the length of the input.
Logarithmic time complexity, denoted by O(log n), occurs when the size of the input data reduces in each step, such as in binary trees or binary search functions.
Quadratic time complexity, denoted by O(n^2), occurs when the running time grows with the square of the input length. Algorithms with nested loops are common in this order.
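These growth rates can be illustrated with operation counters; the counts, not the wall-clock time, are what Big O describes (a minimal sketch, with hypothetical function names):

```python
def constant(arr):
    return arr[0]                         # O(1): one operation regardless of len(arr)

def linear(arr, target):
    ops = 0
    for value in arr:                     # O(n): up to one comparison per element
        ops += 1
        if value == target:
            break
    return ops

def logarithmic(arr, target):
    ops, lo, hi = 0, 0, len(arr) - 1
    while lo <= hi:                       # O(log n): search range halves each step
        ops += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            break
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return ops

def quadratic(arr):
    ops = 0
    for i in arr:                         # O(n^2): nested loops over the input
        for j in arr:
            ops += 1
    return ops

data = list(range(16))
print(linear(data, 15), logarithmic(data, 15), quadratic(data))  # → 16 5 256
```

Doubling the input size adds one operation for the logarithmic function, doubles it for the linear one, and quadruples it for the quadratic one.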
To calculate time complexity, evaluate the order notation for each operation and input size of the algorithm, and compute the total runtime required for a given n. An example algorithm is provided to illustrate this process.
Every element in the result list is then added together to give the final answer.
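The example algorithm itself is not reproduced above. A Python sketch consistent with the line-by-line cost breakdown that follows (a few initialisations, three nested loops filling a result matrix, then a loop summing the result) might be as below; the function name, argument names, and matrix shapes are assumptions:

```python
def multiply_and_sum(A, B, n):
    # Assumed reconstruction: multiply two n x n matrices, then sum the result.
    result = [[0] * n for _ in range(n)]
    total = 0
    for i in range(n):                    # the three nested loops produce
        for j in range(n):                # the cubic term in the run time
            for k in range(n):
                result[i][j] += A[i][k] * B[k][j]
    for row in result:                    # a single loop -> the linear terms
        total += sum(row)
    return total
```

For example, multiplying a 2x2 matrix by the 2x2 identity leaves it unchanged, so the total is just the sum of its entries.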
Let’s assume the cost function C represents the unit time taken to run a function, while ‘n’ represents the number of times the statement is defined to run in an algorithm.
For example, if the time taken to run the print function is 1 microsecond (C), and if the algorithm is defined to run the print function 1000 times (n), then the total run time would be (C * n) = 1 microsec * 1000 = 1 millisecond.
The run time for each line is given by:
Line 1 = C1 * 1
Line 2 = C2 * 1
Line 3,4,5 = (C3 * 1) + (C3 * 1) + (C3 * 1)
Line 6,7,8 = C4 * [n+1]^3 (three nested loops)
Line 9 = C4 * [n]
Line 10 = C5 * 1
Line 11 = C2 * 1
Line 12 = C4 * [n+1]
Line 13 = C4 * [n]
Line 14 = C2 * 1
Line 15 = C6 * 1
The total run time is:
Total run time = (C1 * 1) + 3(C2 * 1) + 3(C3 * 1) + (C4 * [n+1]^3) + (C4 * [n]) + (C5 * 1) + (C4 * [n+1]) + (C4 * [n]) + (C6 * 1)
Replacing all costs with C to estimate the Order of notation:
Total Run Time = 7C + (n^3 + 3n^2 + 3n + 1)C + 3nC + 3C
= 11C + (n^3)C + 3(n^2)C + 6nC
= C(n^3) + 3C(n^2) + 6C(n) + 11C
= O(n^3) + O(n^2) + O(n) + O(1)
By replacing all cost functions with C, we can determine the order of time complexity of the algorithm. The final equation shows that the run time varies as a polynomial function of the input size n, with cubic, quadratic, and linear terms. Since the highest-order term dominates for large n, the overall time complexity of this algorithm is O(n^3).
This is how the order is evaluated for any given algorithm to estimate how its runtime scales with changes in the input size. Note that in practice, the actual values of each cost function need to be known to calculate the exact run time of an algorithm for a given input value n.
Time Complexity of Popular Algorithms
Sorting Algorithms
- Quick Sort: O(n log n) average-case complexity (O(n^2) in the worst case).
- Merge Sort: O(n log n) complexity.
- Bubble Sort: O(n^2) complexity.
Search Algorithms
- Binary Search: O(log n) complexity.
- Linear Search: O(n) complexity.
Space Complexity vs. Time Complexity
While time complexity focuses on the time an algorithm takes, space complexity deals with the amount of memory it requires. There is often a trade-off between the two, where improving one may adversely affect the other.
Time Complexity of Sorting Algorithms
Understanding the time complexities of sorting algorithms helps in selecting the best sorting technique for a given situation.
Time Complexity of Insertion Sort
Best case: O(n), Worst case: O(n^2)
Time Complexity of Merge Sort
Best case: O(n log n), Worst case: O(n log n)
Time Complexity of Bubble Sort
Best case: O(n), Worst case: O(n^2)
Time Complexity of Quick Sort
Best case: O(n log n), Worst case: O(n^2)
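The best-case/worst-case gap above can be observed directly by counting comparisons. For example, bubble sort with an early-exit flag does only n-1 comparisons on already-sorted input but n(n-1)/2 on reversed input (a minimal sketch):

```python
def bubble_sort(arr):
    """Bubble sort with early exit; returns (sorted list, comparison count)."""
    a = list(arr)
    comparisons = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:              # no swaps -> already sorted -> best case O(n)
            break
    return a, comparisons

print(bubble_sort([1, 2, 3, 4, 5])[1])   # best case:  4 comparisons
print(bubble_sort([5, 4, 3, 2, 1])[1])   # worst case: 10 comparisons
```

The early-exit flag is what makes the best case O(n); without it, bubble sort performs O(n^2) comparisons on every input.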
Time Complexity of Searching Algorithms
Let’s explore the time complexities of some Searching Algorithms to understand their efficiency.
Time Complexity of Linear Search
Best case: O(1), Worst case: O(n)
Time Complexity of Binary Search
Best case: O(1), Worst case: O(log n)
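The gap between these two search strategies can be seen in a short sketch (binary search assumes the input list is sorted):

```python
def linear_search(arr, target):
    # Checks elements one by one: O(1) best case, O(n) worst case.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(arr, target):
    # Halves the sorted search range each step: O(1) best, O(log n) worst.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

On a sorted list of a million elements, linear search may need up to a million comparisons, while binary search needs at most about twenty.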
Space Complexity
Space complexity is the amount of memory an algorithm requires, expressed as a function of the input size. Using less auxiliary memory often helps an algorithm run faster in practice, but time and space complexity measure different resources, and as noted above, improving one frequently comes at the cost of the other.
Time Complexity Example
Consider a ride-sharing app like Uber or Lyft. When a user requests a ride, the app needs to find the nearest available driver to match the request.
This process entails searching through the available drivers’ locations to identify the one that is closest to the user’s location.
When it comes to time complexity, let’s examine two approaches for finding the nearest driver: a linear search approach and a more efficient spatial indexing approach.
1. **Linear Search Approach:** In a basic implementation, the app would iterate through the list of available drivers, calculate the distance between each driver’s location and the user’s location, and then select the driver with the shortest distance.
```java
Driver findNearestDriver(List<Driver> drivers, Location userLocation) {
    Driver nearestDriver = null;
    double minDistance = Double.MAX_VALUE;
    // Compare every available driver's distance to the user -> O(n)
    for (Driver driver : drivers) {
        double distance = calculateDistance(driver.getLocation(), userLocation);
        if (distance < minDistance) {
            minDistance = distance;
            nearestDriver = driver;
        }
    }
    return nearestDriver;
}
```
The time complexity of this approach is O(n), where n is the number of available drivers. For a large number of drivers, the app’s performance might suffer, especially during peak times.
2. **Spatial Indexing Approach:** A more efficient approach involves using spatial indexing data structures like Quad Trees or K-D Trees. These structures partition the space into smaller regions, enabling faster searches based on spatial proximity.
```java
Driver findNearestDriverWithSpatialIndex(SpatialIndex index, Location userLocation) {
    // Delegate the nearest-neighbour query to the spatial index (e.g. a Quad Tree)
    Driver nearestDriver = index.findNearestDriver(userLocation);
    return nearestDriver;
}
```
The time complexity of this approach is typically better than O(n) as the search is guided by the spatial structure, eliminating the need to compare distances with all drivers. It could be closer to O(log n) or even better, depending on the specifics of the spatial index.
The contrast in time complexity between the linear search and spatial indexing approach highlights how algorithmic choices can significantly impact the real-time performance of critical operations in a ride-sharing app.
In conclusion, understanding time complexity is crucial when designing algorithms, especially in the ever-evolving world of big data. It aids in resource planning, efficient processing, and effective results. Mastering time complexity can make you a more efficient programmer. Happy Coding!
If you have any questions, feel free to drop them in the comments below, and we’ll respond promptly.