However, a more efficient solution almost always exists. It is the attempt to uncover the most efficient solution that gives rise to the study of time complexity.

Understanding time complexity helps programmers build a strong foundation in Data Structures and Algorithms. It also helps them crack interviews at top product-based and MAANG companies, as this topic is frequently asked in technical interviews.

Time complexity is the amount of time an algorithm takes to run as a function of the length of the input. It's a measure of the efficiency of an algorithm. In this blog, we explore all the common types of time complexities with examples.

**Big O Notation**

Big O notation describes how the time taken to execute an algorithm changes as the input size changes. It is used to express the efficiency of an algorithm in terms of both time and space complexity.

**Big O Complexity Chart**

The Big O Graph expresses the performance of an algorithm as a function of its input size.

This serves as a time complexity cheat sheet that outlines the performance of the various types of time complexities.

The Big O chart shows that O(1), or constant time, is ideal: the algorithm takes the same fixed number of steps regardless of input size. O(log n) is also efficient, and the chart ranks the common complexities as follows:

O(1) - Excellent

O(log n) - Good

O(n) - Fair

O(n log n) - Bad

O(n^2), O(2^n) and O(n!) - Worst

If the calculation doesn't change with more input, it's constant time (O(1)). If the input is halved at each step, it's logarithmic (O(log n)). A single loop means linear time (O(n)). Two loops nested inside each other mean quadratic time (O(n^2)). And if each new input doubles the work, that's exponential time (O(2^n)).
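To get a feel for these growth rates, the following short sketch (illustrative only, not from the original article) prints rough operation counts for a few input sizes:

```python
import math

# Compare how each complexity class grows as n increases
for n in [8, 64, 1024]:
    print(f"n={n:>4}: log n = {math.log2(n):.0f}, n = {n}, "
          f"n log n = {n * math.log2(n):.0f}, n^2 = {n ** 2}")
```

At n = 1024, an O(n^2) algorithm already performs about a million operations, while an O(log n) one needs only around 10.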

**Big O Time Complexity with Examples**

**Constant Time O(1)**

An algorithm has a constant time complexity of O(1) when its running time does not depend on the input size: the execution time is always the same, irrespective of the input.

Example - Checking whether a number is odd or even

```python
number = 15

if number % 2 == 0:
    print("Even")
else:
    print("Odd")
```

**Linear Time Complexity O(n)**

An algorithm whose number of operations grows linearly with the number of inputs has linear time complexity. In short, the time taken is directly proportional to the size of the input.

Example - Linear Search

```python
def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1

array = [5, 3, 8, 6, 7, 2]
print(linear_search(array, 6))  # Returns the index of 6 (here 3)
```

**Logarithmic Time Complexity O(log n)**

When the work done by an algorithm is reduced by a constant fraction (usually half) at each step, it has logarithmic time complexity. This is typical of divide-and-conquer strategies.

Example - Binary Search in a Sorted Array

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

array = [1, 2, 4, 5, 8, 10]
print(binary_search(array, 5))  # Returns the index of 5 (here 3)
```

**Quadratic Time O(n^2)**

When the time taken by an algorithm is proportional to the square of the input size, it exhibits quadratic time complexity. This is often seen in nested iterations.

Example - Checking for duplicates in an Array

```python
def contains_duplicate(arr):
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False

array = [1, 2, 3, 4, 5, 1]
print(contains_duplicate(array))  # Returns True
```

**Exponential Time Complexity O(2^n)**

When an algorithm's runtime doubles with each additional input element, it exhibits exponential time complexity. This is common in algorithms that involve recursive calls with multiple branches, because the algorithm ends up exploring every combination of choices in the input data.

Example - Time Complexity Analysis of Fibonacci Series

```python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # Calculates the 10th Fibonacci number (55)
```

**Linearithmic Time Complexity O(n log n)**

This type of time complexity combines linear and logarithmic behaviour. It's most commonly observed in divide-and-conquer algorithms and efficient sorting algorithms.

Example - Time Complexity Analysis of Quick Sort (O(n log n) on average; note that with this first-element pivot, the worst case, e.g. on already-sorted input, is O(n^2))

```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    less = [x for x in arr[1:] if x <= pivot]
    greater = [x for x in arr[1:] if x > pivot]
    return quick_sort(less) + [pivot] + quick_sort(greater)

array = [10, 7, 8, 9, 1, 5]
sorted_array = quick_sort(array)
print("Sorted array:", sorted_array)
```

**Factorial Time Complexity O(n!)**

Factorial time complexity occurs when the number of operations increases factorially with the size of the input data. It is commonly seen in algorithms that need to generate all possible permutations or combinations of the input data.

Example - Generating all permutations of a string

```python
from itertools import permutations

def all_permutations(s):
    return [''.join(p) for p in permutations(s)]

print(all_permutations("abc"))  # Prints all permutations of "abc"
```

These are the most common time complexities that one can encounter while working with algorithms.

What are some other examples that you can think of for the various types of Time complexities?

With HeyCoach's structured **DSA and System Design Course**, learners can understand time complexity and complex DSA topics with ease. These topics are covered by MAANG coaches who give you insights into the projects that product companies work on. To know more, click here.

**Introduction**

For this DSA Stories insight, we'd like to introduce Aayush Barthwal. He has previously worked at Oracle and Amazon.

We spoke to him in-depth about his journey and the areas where he has optimized code by applying his Data Structures and Algorithms skills.

While the application of data structures varies across companies and products, startups are often more open to experimenting with linear data structures like queues and linked lists, non-linear data structures like binary trees and graphs, and System Design principles in their code.

A brute-force approach is usually the first step, which is then optimized. For instance, if there are two nested for loops, can we use a single loop to reduce computation time?
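As a hypothetical sketch of that kind of optimization (the function names here are illustrative, not from Aayush's projects), consider checking whether any pair in an array sums to a target: two nested loops take O(n^2), while a single loop with a set takes O(n):

```python
def has_pair_with_sum_slow(nums, target):
    # Brute force: two nested loops -> O(n^2) comparisons
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_with_sum_fast(nums, target):
    # Single loop with a set of values seen so far -> O(n)
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False

print(has_pair_with_sum_fast([2, 7, 11, 15], 9))  # True (2 + 7 == 9)
```

Both functions give the same answers; the second simply trades a little memory for a large reduction in comparisons.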

**Hierarchical Structure**

Aayush talks about companies offering consulting services that have hierarchical structures. Within HR Analytics, the employees of these companies can form clusters. Traversal can happen within these tree-like clusters.

**Amazon Experience**

At Amazon, Aayush worked on TBs of data from every country. Let's take a case study: a potential customer decides to purchase an iPhone. While they can directly log in to the Amazon application, the traffic can come from multiple channels like Google Ads, Facebook, and email.

Amazon's spending on each channel depends on consumer behavior, analysed through complexity analysis. They utilize machine learning algorithms to develop models that are updated weekly through retraining.

The two models built are based on Linear Regression and Neural Networks. This project is part of the Consumer Behavior and Analytics domain.

**Microsoft Insights**

At Microsoft, employees often work on Power BI to build dashboards. They manually detect anomalies like a sharp incline or decline in the dashboard. These detections can be automated. Using a Python script, a graph is plotted on the analysis considering any two points to calculate the slope.

The objective is for the change to remain below 20% while maintaining the latency at 1 ms. A 20% increase at 1 ms is acceptable, since the resulting latency is only 1.2 ms. But a similar change in the graph's slope at 65 million calls is quite significant.

**Meetings Project**

In another venture, we explored a project around meetings. Let's say the CEO of Microsoft, Satya Nadella, has many meetings on his agenda. The number of unread emails containing meeting requests can run into the hundreds!

This scenario applies to most senior executives as well. A delegator categorizes these meetings and lists them in the order of their priority. This task usually takes 1-2 weeks to perform manually.

Now, the development team takes the responsibility to reduce this time. Nobody wants to spend a week classifying emails! Data retrieval was done from APIs using their Token IDs.

The team retrieves the data package directly from calendar requests.

The team performed sorting using an algorithm that analyses these emails. The model considers emails from the past six months and analyzes their patterns.

Words: Analysis of word frequency in the subject line and body of meeting emails. For example, if the phrase "Viva Insights" appears in the subject line and is repeated across multiple meetings, the model will recognize this occurrence pattern.

Number of Participants Involved in a Meeting - For Satya Nadella to address a Town Hall meeting, the number of participants would be more than 50, and if it is a Team-based or 1-1 meeting, the number of participants could range from 1-4.

**Solution**

A Machine Learning approach, implementing the K-Means Algorithm, was developed to create a model to analyze words and the number of participants.

An alternative algorithm, developed from scratch, utilized a hit-and-trial method to determine coefficients. This algorithm performed weighted sorting of words, thereby creating clusters of meetings.

**Since the algorithmic efficiency was better, the alternative algorithm was finally patented!**

**Result**

The meeting emails were categorized and color-coded in Red, Yellow, or Green to mark them in the order of their priority. The delegator further ranks them on their priority to prepare a final list.

**Initially, the categorization used to take 1-2 weeks. The newly developed algorithm performed this task in 2 minutes!**

For Satya Nadella, Town Hall meetings take precedence over Team Meetings, while for a Media House, Town Halls are of utmost priority.

Data Structures and Algorithms enabled this solution. It's also essential to understand machine learning algorithms to construct ML models or develop an algorithm from scratch.

This learning can be programming language independent. For software development, this is of utmost importance. Data structures enhance the problem-solving skills of professionals, enabling them to excel in coding challenges.

**Bonus**

Other exciting applications of DSA include the following areas:

- Google Flights and similar applications that determine the shortest path for connecting flights; graph theory has important applications here.
- Uber / RazorPay / Flipkart machine coding interview rounds - a use case for stack structures, including GitHub-style version control.
- Word applications - undo or redo: a use case for stacks.
- Companies that push data - queues.
- Booked a ticket for the World Cup match? - a queue's FIFO structure!
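As a minimal sketch of the undo/redo use case (a toy example, not production editor code), two stacks are enough:

```python
class TextEditor:
    """Toy editor: undo/redo implemented with two stacks (Python lists)."""

    def __init__(self):
        self.text = ""
        self.undo_stack = []
        self.redo_stack = []

    def type(self, s):
        self.undo_stack.append(self.text)  # save the state before the change
        self.text += s
        self.redo_stack.clear()  # a fresh edit invalidates the redo history

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(self.text)
            self.text = self.undo_stack.pop()

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.text)
            self.text = self.redo_stack.pop()

ed = TextEditor()
ed.type("hello")
ed.type(" world")
ed.undo()
print(ed.text)  # hello
ed.redo()
print(ed.text)  # hello world
```

Each operation is O(1): undo pops the previous state, redo pushes it back, exactly the stack behaviour word processors rely on.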

While it may not be visible, DSA is implemented from the ground level. Where have you used DSA lately?

To level up your application of Data Structures and Algorithms, speak to HeyCoach's learning consultants for a better roadmap. Click here.

ATS is an applicant tracking system that acts as an electronic gatekeeper for employers and hiring managers. Your resume gets scanned by the ATS to extract keywords, which are then matched against the job description. Some ATS also give your resume a rank that recruiters use to filter out candidates. If the ATS cannot parse your resume or gives it a low score, your application is bound to get rejected.

Nowadays, it's said that 75% of large MNCs rely on ATS to filter out candidates. For that reason, your resume must be ATS-friendly.

A resume is supposed to give a gist about you and excite the reader to know more. Think of it as a movie teaser: if the teaser excites you, you will watch the movie. A good resume makes all the difference.

To land an interview, the first step is to get your resume noticed. The majority of people make mistakes in crafting their resume, which leads to rejection even when they are a good fit for the role.

Hopefully, after reading this blog post, you people won't be making those mistakes with your resume. Let's start with the tips.

Here are some tips to make your resume ATS-friendly.

"One size doesn't fit all", and similarly one resume won't work for different job applications. So, customize your resume according to the profile you are applying for. For example, if you are applying for a back-end role, highlight your back-end skills more than the rest; if you are applying for a data science position, don't forget to put your data science projects at the top.

When your resume is being parsed by the ATS, it will try to find keywords that match the job description and then accordingly rate your resume. So make sure that you use all the important keywords.
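A toy sketch of how such keyword matching might work (the scoring here is a simplification; real ATS software is proprietary and far more sophisticated):

```python
import re

def keyword_match_score(resume_text, keywords):
    """Fraction of the job-description keywords found in the resume."""
    resume_words = set(re.findall(r"[a-z0-9+#.]+", resume_text.lower()))
    hits = [k for k in keywords if k.lower() in resume_words]
    return len(hits) / len(keywords), hits

resume = "Python developer with SQL, Docker and REST API experience"
score, hits = keyword_match_score(resume, ["python", "sql", "aws"])
print(f"score = {score:.2f}, matched = {hits}")  # score = 0.67, matched = ['python', 'sql']
```

The takeaway: if a keyword from the job description never appears verbatim in your resume, a scorer like this simply cannot credit you for it.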

Many times, people use tables to represent their education or projects; these aren't ATS-friendly and should be avoided.

According to some studies, resume length doesn't matter to ATS, but ideally it's recommended to have a 1-page resume, since a recruiter will hardly spend 10-20 seconds skimming it and might not even turn to the second page.

I have come across many candidates who share the Word or Google Doc version of their resume, and when I try to open it, the layout often gets messed up and I can't read the resume properly. So, to be on the safe side, ALWAYS share a PDF version of your resume.

This is such a simple tip, yet many candidates forget to check their resumes for spelling mistakes. Whenever you make changes to your resume, run the text through tools like Grammarly, or at least have a friend check it. Such silly mistakes leave a bad impression and should be avoided.

Some sites to help you improve your resume as per ATS:

- https://www.jobscan.co/
- https://resumeworded.com/resume-scanner
- https://github.com/LWandres/Clarity
- https://github.com/indranildchandra/JobDescription-Keywords-Extractor
- https://cvscan.uk/

Given two strings word1 and word2, return the minimum number of steps required to make word1 and word2 the same.

In one step, you can delete exactly one character in either string.

Example 1:

Input: word1 = "sea", word2 = "eat"

Output: 2

Explanation: You need one step to make "sea" to "ea" and another step to make "eat" to "ea".

What are the subproblems in this case?

The idea is to process all characters one by one, starting from either the left or the right side of both strings. Let us traverse from the left: there are two possibilities for every pair of characters being traversed.

m: Length of str1 (first string)

n: Length of str2 (second string)

If the first characters of the two strings are the same, there is nothing much to do: ignore the first characters and get the count for the remaining strings, i.e., recur for lengths m-1 and n-1. Else (if the first characters are not the same), delete the first character of str1 or the first character of str2, recursively compute the minimum cost of the two operations, and take the minimum of the two values.

Remove from str1: recur for m-1 and n

Remove from str2: recur for m and n-1

```cpp
class Solution {
public:
    int minDistance(string word1, string word2) {
        if (word1.size() == 0 || word2.size() == 0)
            return max(word1.size(), word2.size());
        if (word1[0] == word2[0])
            return minDistance(word1.substr(1), word2.substr(1));
        int op1 = minDistance(word1.substr(1), word2);
        int op2 = minDistance(word1, word2.substr(1));
        return min(op1, op2) + 1;
    }
};
```

Time Complexity:- O(2^(m+n)) in the worst case, since every call can branch into two further calls.

We can use memoization on the above solution to get rid of the TLE, i.e., top-down DP.

```cpp
class Solution {
public:
    int minDistance_memoization(string word1, string word2, int** dp) {
        if (word1.size() == 0 || word2.size() == 0)
            return max(word1.size(), word2.size());
        int n = word2.size();
        int m = word1.size();
        if (dp[m][n] != -1)
            return dp[m][n];
        int ans;
        if (word1[0] == word2[0])
            ans = minDistance_memoization(word1.substr(1), word2.substr(1), dp);
        else {
            int op1 = minDistance_memoization(word1.substr(1), word2, dp);
            int op2 = minDistance_memoization(word1, word2.substr(1), dp);
            ans = min(op1, op2) + 1;
        }
        dp[m][n] = ans;
        return ans;
    }

    int minDistance(string word1, string word2) {
        int n = word2.size();
        int m = word1.size();
        int** dp = new int*[m + 1];
        for (int i = 0; i <= m; i++) {
            dp[i] = new int[n + 1];
            for (int j = 0; j <= n; j++)
                dp[i][j] = -1;
        }
        return minDistance_memoization(word1, word2, dp);
    }
};
```

If we draw the recursion tree, we can see that the same subproblems are solved again and again; for example, ED(2, 2) is called three times. Since the same subproblems are recomputed, this problem has the overlapping subproblems property, and thus both properties of a dynamic programming problem. As with other typical dynamic programming (DP) problems, recomputation of the same subproblems can be avoided by constructing a temporary array that stores the results of subproblems.

```cpp
class Solution {
public:
    int minDistance(string s, string t) {
        int m = s.size();
        int n = t.size();
        int** output = new int*[m + 1];
        for (int i = 0; i <= m; i++) {
            output[i] = new int[n + 1];
        }
        // Fill 1st row
        for (int j = 0; j <= n; j++) {
            output[0][j] = j;
        }
        // Fill 1st col
        for (int i = 1; i <= m; i++) {
            output[i][0] = i;
        }
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                if (s[m - i] == t[n - j]) {
                    output[i][j] = output[i - 1][j - 1];
                } else {
                    int a = output[i - 1][j];
                    int b = output[i][j - 1];
                    output[i][j] = min(a, b) + 1;
                }
            }
        }
        return output[m][n];
    }
};
```

Time Complexity: O(m x n)

Auxiliary Space: O(m x n)

Given the head of a singly linked list where elements are sorted in ascending order, convert it to a height-balanced BST.

For this problem, a height-balanced binary tree is defined as a binary tree in which the depth of the two subtrees of every node never differs by more than 1.

Example:-

Input: head = [-10,-3,0,5,9]

Output: [0,-3,9,-10,null,5]

Explanation: One possible answer is [0,-3,9,-10, null,5], which represents the shown height-balanced BST.

We first find the middle node of the list and make it the root of the tree to be constructed.

Steps:-

1) Get the Middle of the linked list and make it root.

2) Recursively do the same for the left half and right half.

Get the middle of the left half and make it left child of the root created in step 1.

Get the middle of the right half and make it the right child of the root created in step 1.

```cpp
/**
 * Definition for singly-linked list.
 * struct ListNode {
 *     int val;
 *     ListNode *next;
 *     ListNode() : val(0), next(nullptr) {}
 *     ListNode(int x) : val(x), next(nullptr) {}
 *     ListNode(int x, ListNode *next) : val(x), next(next) {}
 * };
 */
/**
 * Definition for a binary tree node.
 * struct TreeNode {
 *     int val;
 *     TreeNode *left;
 *     TreeNode *right;
 *     TreeNode() : val(0), left(nullptr), right(nullptr) {}
 *     TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
 *     TreeNode(int x, TreeNode *left, TreeNode *right) : val(x), left(left), right(right) {}
 * };
 */
class Solution {
public:
    TreeNode* sortedListToBST(ListNode* head, ListNode* tail) {
        if (head == tail)
            return NULL;
        if (head->next == tail) {
            TreeNode* root = new TreeNode(head->val);
            return root;
        }
        // Find the middle node with slow/fast pointers
        ListNode *mid = head, *temp = head;
        while (temp != tail && temp->next != tail) {
            mid = mid->next;
            temp = temp->next->next;
        }
        TreeNode* root = new TreeNode(mid->val);
        root->left = sortedListToBST(head, mid);
        root->right = sortedListToBST(mid->next, tail);
        return root;
    }

    TreeNode* sortedListToBST(ListNode* head) {
        return sortedListToBST(head, NULL);
    }
};
```

Given an array of non-negative integer nums, you are initially positioned at the first index of the array.

Each element in the array represents your maximum jump length at that position.

Your goal is to reach the last index in the minimum number of jumps.

You can assume that you can always reach the last index.

Example 1:

Input: nums = [2,3,1,1,4]

Output: 2

Explanation: The minimum number of jumps to reach the last index is 2. Jump 1 step from index 0 to 1, then 3 steps to the last index.

This can be thought of as a BFS problem, where the nodes at the ith level are all the nodes that can be reached from the (i-1)th level.

Level 0: 2

Level 1: 3 1

Level 2: 1 4

Clearly, 4 can be reached in only 2 jumps.

```cpp
class Solution {
public:
    int jump(vector<int>& nums) {
        if (nums.size() <= 1)
            return 0;
        int cur_max = 0, i = 0;
        int level = 0;
        while (i <= cur_max) {
            int at_max = cur_max;
            for (; i <= cur_max; i++) {
                at_max = max(at_max, nums[i] + i);
                if (at_max >= nums.size() - 1)
                    return level + 1;
            }
            cur_max = at_max;
            level++;
        }
        return 0;
    }
};
```

Time Complexity:- O(N)

Space Complexity:- O(1)

Approach 2:- We can start from the back, and for each index, compute how many jumps are needed to reach the last index from there.

```cpp
#define MAX 100001
class Solution {
public:
    int jump(vector<int>& nums) {
        vector<int> dp(nums.size(), 0);
        for (int i = nums.size() - 2; i >= 0; i--) {
            int steps = nums[i] + i;
            int ans = MAX;
            for (int j = i + 1; j <= steps && j < nums.size(); j++) {
                if (nums[j] == 0)
                    continue;
                ans = min(ans, dp[j]);
            }
            if ((nums[i] + i) >= (nums.size() - 1))
                ans = 0;
            dp[i] = ++ans;
        }
        return dp[0];
    }
};
```

Time Complexity:- O(N*N)

Space Complexity:- O(N)

Given an array nums with n integers, your task is to check if it could become non-decreasing by modifying at most one element.

We define an array as non-decreasing if nums[i] <= nums[i + 1] holds for every i (0-based) such that 0 <= i <= n - 2.

Example 1:

Input: nums = [4,2,3]

Output: true

Explanation: You could modify the first 4 to 1 to get a non-decreasing array.

We are only allowed to modify once.

While modifying, we have two options: change nums[i-1] or change nums[i].

But for that, we need to check the value of nums[i-2] as well, provided i-2 >= 0.

1 5 4 4: let i = 2. Here nums[i] > nums[i-2], and since nums[i-1] is also greater than nums[i-2], we can simply set nums[i-1] = nums[i].

3 5 2: let i = 2. Here nums[i] < nums[i-2], and since nums[i-1] must already be greater than nums[i-2], we instead set nums[i] = nums[i-1].

And if the change count exceeds 1, it means we would need to alter more than one element, which is not allowed. Hence, we return false at that moment.

```cpp
class Solution {
public:
    bool checkPossibility(vector<int>& nums) {
        int changed = 0;
        for (int i = 1; i < nums.size(); i++) {
            if (nums[i] < nums[i - 1]) {
                if (changed++)
                    return false;
                (i - 2 < 0 || nums[i] >= nums[i - 2]) ? nums[i - 1] = nums[i]
                                                      : nums[i] = nums[i - 1];
            }
        }
        return true;
    }
};
```

Time Complexity:- O(N)

Space Complexity:- O(1)

Given an array nums, we define a running sum of an array as runningSum[i] = sum(nums[0] … nums[i]).

Return the running sum of nums.

Example:-

Input: nums = [1,2,3,4]

Output: [1,3,6,10]

Explanation: Running sum is obtained as follows: [1, 1+2, 1+2+3, 1+2+3+4].

We need to calculate the running sum. The running sum at every index can be calculated by adding the element present at that index with the running sum till the previous index

i.e.

`dp[i] =nums[i] + dp[i-1]`

Here dp contains the running sum till the current index i.e. i.

```cpp
vector<int> runningSum(vector<int>& nums) {
    vector<int> ans;
    ans.push_back(nums[0]);
    for (int i = 1; i < nums.size(); i++) {
        ans.push_back(nums[i] + ans.back());
    }
    return ans;
}
```

Time Complexity:- O(n)

Space Complexity:- O(1) extra (excluding the output array)

Given an array of integers nums sorted in ascending order, find the starting and ending position of a given target value.

If target is not found in the array, return [-1, -1].

Example 1:

Input: nums = [5,7,7,8,8,10], target = 8

Output: [3,4]

Example 2:

Input: nums = [5,7,7,8,8,10], target = 6

Output: [-1,-1]

Example 3:

Input: nums = [], target = 0

Output: [-1,-1]

An efficient approach to this problem is to use Binary Search to search the first and the last index.

```cpp
class Solution {
public:
    int first_index(vector<int>& nums, int target) {
        int start = 0, end = nums.size() - 1;
        int ans = -1;
        while (start <= end) {
            // Normal binary search logic
            int mid = (start + end) / 2;
            if (nums[mid] > target)
                end = mid - 1;
            else if (nums[mid] < target)
                start = mid + 1;
            else {
                // If nums[mid] equals the target, we update ans
                // and move to the left half
                ans = mid;
                end = mid - 1;
            }
        }
        return ans;
    }

    int second_index(vector<int>& nums, int target) {
        int start = 0, end = nums.size() - 1;
        int ans = -1;
        while (start <= end) {
            // Normal binary search logic
            int mid = (start + end) / 2;
            if (nums[mid] > target)
                end = mid - 1;
            else if (nums[mid] < target)
                start = mid + 1;
            else {
                // If nums[mid] equals the target, we update ans
                // and move to the right half
                ans = mid;
                start = mid + 1;
            }
        }
        return ans;
    }

    vector<int> searchRange(vector<int>& nums, int target) {
        vector<int> ans;
        ans.push_back(first_index(nums, target));
        ans.push_back(second_index(nums, target));
        return ans;
    }
};
```

Time Complexity: O(log n)

Auxiliary Space: O(1)

A robot is located at the top-left corner of an m x n grid (marked 'Start' in the diagram below).

The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked 'Finish' in the diagram below).

Now consider if some obstacles are added to the grids. How many unique paths would there be?

An obstacle and an empty space are marked as 1 and 0 respectively in the grid.

Example:-

Input: obstacleGrid = [[0,0,0],[0,1,0],[0,0,0]]

Output: 2

Explanation: There is one obstacle in the middle of the 3x3 grid above.

There are two ways to reach the bottom-right corner:

Right -> Right -> Down -> Down

Down -> Down -> Right -> Right

The robot can only move either down or right. Hence any cell in the first row can only be reached from the cell to its left. However, if any cell has an obstacle, it doesn't contribute to any path. So, for the first row, the number of ways is simply:

```
if obstacleGrid[i][j] is not an obstacle
    obstacleGrid[i][j] = obstacleGrid[i][j - 1]
else
    obstacleGrid[i][j] = 0
```

Similar processing can be done to find the number of ways of reaching the cells in the first column. For any other cell, we can find the number of ways of reaching it by using the number of ways of reaching the cell directly above it and the cell to its left, because these are the only two directions from which the robot can arrive. Since we reuse pre-computed values along the iteration, this becomes a dynamic programming problem:

```
if obstacleGrid[i][j] is not an obstacle
    obstacleGrid[i][j] = obstacleGrid[i][j - 1] + obstacleGrid[i - 1][j]
else
    obstacleGrid[i][j] = 0
```

```cpp
class Solution {
public:
    int uniquePathsWithObstacles(vector<vector<int>>& obstacleGrid) {
        int m = obstacleGrid.size(), n = obstacleGrid[0].size();
        if (m == 0 || n == 0)
            return 0;
        int dp[m][n];
        memset(dp, 0, sizeof(dp));
        for (int i = 0; i < m; i++) {
            if (obstacleGrid[i][0] != 1)
                dp[i][0] = 1;
            else
                break;
        }
        for (int j = 0; j < n; j++) {
            if (obstacleGrid[0][j] != 1)
                dp[0][j] = 1;
            else
                break;
        }
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                if (obstacleGrid[i][j] != 1)
                    dp[i][j] = dp[i - 1][j] + dp[i][j - 1];
            }
        }
        return dp[m - 1][n - 1];
    }
};
```

Time Complexity=O(mn)

Space Complexity=O(mn)

An integer n is a power of three if there exists an integer x such that n == 3^x.

Example 1:

Input: n = 27

Output: true

Example 2:

Input: n = 0

Output: false

Example 3:

Input: n = 9

Output: true

Example 4:

Input: n = 45

Output: false

Firstly, since the input can be a negative integer, we can directly return false for those, as negative integers cannot be powers of three. In fact, any integer less than 1 cannot be a power of three. Now, 1 is three raised to the power 0. For any other number > 1, we keep dividing the number by 3; if at any step the remainder is not 0, we return false. Otherwise, after reducing the number to 1, we return true.

```cpp
class Solution {
public:
    bool isPowerOfThree(int n) {
        if (n < 1)  // all negative numbers and 0 can't be a power of 3
            return false;
        if (n == 1)  // 1 is a power of 3 (3^0)
            return true;
        while (n > 1) {
            int remainder = n % 3;  // remainder of n after dividing by 3
            // If the remainder is not 0, the number can't be a power of 3
            // since it is not divisible by 3
            if (remainder != 0)
                return false;
            n /= 3;  // divide n by 3 if the remainder is 0
        }
        return true;
    }
};
```

Time Complexity: O(log3(n))

Space Complexity: O(1)

```cpp
class Solution {
public:
    bool isPowerOfThree(int n) {
        // 1162261467 = 3^19, the largest power of 3 in a 32-bit signed int
        return n > 0 && 1162261467 % n == 0;
    }
};
```

This works because every smaller power of 3 divides any larger power of 3:

9 % 3 == 27 % 3 == 27 % 9 == 81 % 27 == 0, and so on.

Since 1162261467 is the highest power of 3 that fits in a 32-bit signed integer, any n that is a power of 3 must satisfy 1162261467 % n == 0.
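We can sanity-check this claim with a short Python script (a verification sketch, not part of the solution itself):

```python
LARGEST_POW3 = 1162261467

# 1162261467 is exactly 3^19, and 3^20 overflows a signed 32-bit integer
assert LARGEST_POW3 == 3 ** 19
assert 3 ** 20 > 2 ** 31 - 1

def is_power_of_three(n):
    # The O(1) divisibility trick from the C++ solution above
    return n > 0 and LARGEST_POW3 % n == 0

# The trick agrees with a brute-force check over a sample range
powers = {3 ** k for k in range(20)}
assert all(is_power_of_three(n) == (n in powers) for n in range(1, 10000))
print("all checks passed")
```

Note that this trick relies on 3 being prime; the analogous shortcut would not work for powers of a composite base like 4 or 6.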

Time Complexity: O(1)

Space Complexity: O(1)

You are given an integer array of heights representing the heights of buildings, some bricks, and some ladders.

You start your journey from building 0 and move to the next building by possibly using bricks or ladders.

While moving from building i to building i+1 (0-indexed),

If the current building's height is greater than or equal to the next building's height, you do not need a ladder or bricks. If the current building's height is less than the next building's height, you can either use one ladder or (h[i+1] - h[i]) bricks.

Return the furthest building index (0-indexed) you can reach if you use the given ladders and bricks optimally.

Example 1:-

Input: heights = [4,2,7,6,9,14,12], bricks = 5, ladders = 1

Output: 4

Explanation: Starting at building 0, you can follow these steps:

Go to building 1 without using ladders nor bricks since 4 >= 2.

Go to building 2 using 5 bricks. You must use either bricks or ladders because 2 < 7.

Go to building 3 without using ladders nor bricks since 7 >= 6.

Go to building 4 using your only ladder. You must use either bricks or ladders because 6 < 9.

It is impossible to go beyond building 4 because you do not have any more bricks or ladders.

The underlying concept is greedy: we first try to use bricks, and then ladders, since ladders are more "powerful" than bricks. When we meet a taller building, we first try to use bricks. If we no longer have the required number of bricks, we have to use a ladder. We can imagine "time traveling" back to the step where we used the most bricks, taking those bricks back, and using the ladder there instead, so that the reclaimed bricks can be used to travel further along our path.

```cpp
class Solution {
public:
    int furthestBuilding(vector<int>& heights, int bricks, int ladders) {
        priority_queue<int> pq;  // max-heap of brick costs paid so far
        int cB = 0, lad = ladders, n = heights.size();
        for (int i = 1; i < n; i++) {
            if (heights[i] > heights[i - 1]) {
                int diff = heights[i] - heights[i - 1];
                cB += diff;  // current number of bricks required
                pq.push(diff);
                if (cB > bricks) {  // required bricks exceed the available bricks
                    if (lad > 0) {
                        // Since we have a ladder left, we can either use it
                        // directly here, or imagine traveling back to the step
                        // where we used the most bricks, using the ladder there,
                        // and taking those bricks back to use here.
                        cB -= pq.top();
                        pq.pop();
                        lad--;
                    } else {
                        return i - 1;
                    }
                }
            }
        }
        // We reached the last building
        return n - 1;
    }
};
```

There are n servers numbered from 0 to n - 1 connected by undirected server-to-server connections forming a network where connections[i] = [ai, bi] represents a connection between servers ai and bi. Any server can reach other servers directly or indirectly through the network.

A critical connection is a connection that, if removed, will make some servers unable to reach some other server.

Return all critical connections in the network in any order.

Example 1:-

Input: n = 4, connections = [[0,1],[1,2],[2,0],[1,3]]

Output: [[1,3]]

Explanation: [[3,1]] is also accepted.

This problem requires us to find critical connections in a network, removing which some server might not be able to reach some other server. The gist is that we have to find bridges in an undirected graph. What can we use for that? Tarjan's Algorithm!

What is Tarjan's Algorithm? Tarjan's Algorithm helps us find strongly connected components, articulation points, and bridges in a graph. Here we keep track of the parent of every node, the lowlink value of every node, and the id of every node. Initially, we initialise all parents, lowlink values, and ids to -1. An id of -1 means the node has not been visited yet.

par[node] => Denotes the parent of the node

id[node] => Denotes the id of the node

low[node] => Denotes the lowlink value of the node, i.e., the lowest id among the nodes reachable from it

time => Denotes the time at which a node gets visited (the 1st node gets visited at 0, its neighbours at 1, and so on)

When we visit a node, we assign it a lowlink value and an id number. Since the id is no longer negative, this also marks the node as visited. Then we traverse the adjacent nodes of the given node.

If an adjacent node (u) is unvisited, we mark the given node as u's parent and perform a DFS from u. Afterwards, we minimise the given node's lowlink value with u's lowlink value. If the id of the given node is less than the lowlink value of u, we have found a bridge!

If the adjacent node is already visited, we check that it is not the parent of the given node (the parent is also adjacent to the node), and if it is not, we minimise the lowlink value of the node with the id of the adjacent node. And we are done!

```cpp
class Solution {
public:
    vector<vector<int>> ans; // required answer
    vector<vector<int>> adj; // undirected graph
    vector<int> id;          // id number
    vector<int> low;         // lowlink value
    vector<int> par;         // parent
    int time = 0;            // time

    void dfs(int node) {
        id[node] = low[node] = time++;  // incrementing time after each node gets visited
        for (int &u : adj[node]) {      // exploring all the adjacent nodes of node
            if (id[u] == -1) {          // adjacent node is unvisited
                par[u] = node;          // assigning node as parent of adjacent node
                dfs(u);                 // performing DFS on the adjacent node
                low[node] = min(low[node], low[u]); // minimising lowlink value of node
                if (id[node] < low[u])              // checking if we have a bridge
                    ans.push_back({node, u});
            } else if (u != par[node]) {            // visited, and not the parent of node
                low[node] = min(low[node], id[u]);  // minimising lowlink value of node
            }
        }
    }

    vector<vector<int>> criticalConnections(int n, vector<vector<int>>& connections) {
        adj.resize(n);
        par.resize(n, -1);
        low.resize(n, -1);
        id.resize(n, -1);
        for (int i = 0; i < connections.size(); i++) { // building our graph
            adj[connections[i][0]].push_back(connections[i][1]);
            adj[connections[i][1]].push_back(connections[i][0]);
        }
        for (int i = 0; i < n; i++) {  // performing DFS on unvisited nodes
            if (id[i] == -1)
                dfs(i);
        }
        return ans;
    }
};
```

Time Complexity: O(V+E) (No. of Vertices + No. of Edges)

Given a binary string s, return the number of non-empty substrings that have the same number of 0's and 1's, and all the 0's and all the 1's in these substrings are grouped consecutively.

Substrings that occur multiple times are counted the number of times they occur.

Example 1:-

Input: s = "00110011"

Output: 6

Explanation: There are 6 substrings that have equal number of consecutive 1's and 0's: "0011", "01", "1100", "10", "0011", and "01".

Notice that some of these substrings repeat and are counted the number of times they occur.Also, "00110011" is not a valid substring because all the 0's (and 1's) are not grouped together.

We take two variables, prev and next. next counts the current run of identical characters, whether '0's or '1's. Whenever the character changes (from '0' to '1' or '1' to '0'), we copy next into prev and reset next to 1, so prev holds the length of the previous run while next starts counting the new one. Whenever prev is greater than or equal to next, the current character closes another valid substring, so we increment the count.

```cpp
class Solution {
public:
    int countBinarySubstrings(string s) {
        int prev = 0;   // length of the previous run
        int next = 1;   // length of the current run
        int count = 0;
        int n = s.size();
        for (int i = 1; i < n; i++) {
            if (s[i] == s[i-1]) {
                next++;
            } else {          // character changed: current run becomes previous
                prev = next;
                next = 1;
            }
            if (prev >= next) // current character closes a valid substring
                count++;
        }
        return count;
    }
};
```

Time Complexity: O(n)

Space Complexity: O(1)

There is a rectangular brick wall in front of you with n rows of bricks. The ith row has some number of bricks each of the same height (i.e., one unit) but they can be of different widths. The total width of each row is the same.

Draw a vertical line from the top to the bottom and cross the least bricks. If your line goes through the edge of a brick, then the brick is not considered as crossed. You cannot draw a line just along one of the two vertical edges of the wall, in which case the line will obviously cross no bricks.

Given the 2D array wall that contains the information about the wall, return the minimum number of crossed bricks after drawing such a vertical line.

Input: wall = [[1,2,2,1],[3,1,2],[1,3,2],[2,4],[3,1,2],[1,3,1,1]]

Output: 2

We are going to keep track of the frequency of each edge position. We are interested in the edge position that occurs in the most rows, as a line drawn there passes through the fewest bricks. So we record the frequency of every edge in every row while updating the maximum frequency seen. Finally, the number of crossed bricks equals the total number of rows minus the maximum frequency.

Solution:-

```cpp
class Solution {
public:
    int leastBricks(vector<vector<int>>& wall) {
        unordered_map<int, int> frequency; // edge position -> number of rows with an edge there
        int max_frequency = 0;
        for (int i = 0; i < wall.size(); i++) {
            int edge_size = 0;
            for (int j = 0; j < wall[i].size() - 1; j++) { // skip the wall's right edge
                edge_size += wall[i][j];
                frequency[edge_size]++;
                max_frequency = max(max_frequency, frequency[edge_size]);
            }
        }
        return wall.size() - max_frequency;
    }
};
```

Time Complexity: O(n*m)

Space Complexity: O(m)

Given a triangle array, return the minimum path sum from top to bottom.

For each step, you may move to an adjacent number of the row below. More formally, if you are on index i on the current row, you may move to either index i or index i + 1 on the next row.

Example 1:-

Input: triangle = [[2],[3,4],[6,5,7],[4,1,8,3]]

Output: 11

Explanation: The triangle looks like:

*2*

*3* 4

6 *5* 7

4 *1* 8 3

The minimum path sum from top to bottom is 2 + 3 + 5 + 1 = 11 (underlined above).

At every step, we have two possible options: take the jth index in the row or the (j+1)th index, and recursively call the function on each. We memoize the results to avoid recomputing identical calls.

```cpp
class Solution {
public:
    int minimum(vector<vector<int>>& triangle, int i, int j, int **dp) {
        int n = triangle.size();
        if (i == n)
            return 0;
        if (dp[i][j] != -1)  // already computed
            return dp[i][j];
        int op1 = triangle[i][j]   + minimum(triangle, i+1, j,   dp); // take index j
        int op2 = triangle[i][j+1] + minimum(triangle, i+1, j+1, dp); // take index j+1
        dp[i][j] = min(op1, op2);
        return dp[i][j];
    }

    int minimumTotal(vector<vector<int>>& triangle) {
        int n = triangle.size();
        int** dp = new int*[n];
        for (int i = 0; i < n; i++) {
            dp[i] = new int[i+1];
            for (int j = 0; j < i+1; j++)
                dp[i][j] = -1;
        }
        return minimum(triangle, 1, 0, dp) + triangle[0][0];
    }
};
```

Given the root of an n-ary tree, return the preorder traversal of its nodes' values.

N ary-Tree input serialization is represented in their level order traversal. Each group of children is separated by the null value (See examples)

Example 1:-

Input: root = [1,null,3,2,4,null,5,6]

Output: [1,3,5,6,2,4]

In a binary tree, we first print the root's data, then visit the left child followed by the right child. In an n-ary tree, instead of two children we have n children, so we iterate over all the children, preceded by the root's value.

```cpp
class Solution {
public:
    void preorder(Node* root, vector<int>& res) {
        if (root == NULL)
            return;
        res.push_back(root->val);             // root first
        for (int i = 0; i < root->children.size(); i++)
            preorder(root->children[i], res); // then each child, left to right
    }

    vector<int> preorder(Node* root) {
        vector<int> res;
        preorder(root, res);
        return res;
    }
};
```

Given an array of distinct integers nums and a target integer target, return the number of possible combinations that add up to the target.

The answer is guaranteed to fit in a 32-bit integer.

Example 1:-

Input: nums = [1,2,3], target = 4

Output: 7

Explanation:

The possible combination ways are:

(1, 1, 1, 1)

(1, 1, 2)

(1, 2, 1)

(1, 3)

(2, 1, 1)

(2, 2)

(3, 1)

Note that different sequences are counted as different combinations.

The question is very similar to the famous staircase question.

The only difference is that in the staircase question the possible step sizes are fixed and known beforehand, whereas here all possible steps (numbers) are given in the nums vector. We therefore iterate over nums to try every possible step, as shown in the solution below. A plain recursive implementation gives TLE as usual; it can be optimized by saving computed values in a temporary array, i.e., by memoization.

```cpp
class Solution {
public:
    int combinationSum4(vector<int>& nums, int target, int dp[]) {
        if (target < 0)
            return 0;
        if (target == 0)
            return 1;
        if (dp[target] != -1)  // already computed
            return dp[target];
        int ans = 0;
        for (int i = 0; i < nums.size(); i++)
            ans += combinationSum4(nums, target - nums[i], dp); // try every step
        dp[target] = ans;
        return ans;
    }

    int combinationSum4(vector<int>& nums, int target) {
        int dp[1001];
        for (int i = 0; i < 1001; i++)
            dp[i] = -1;
        return combinationSum4(nums, target, dp);
    }
};
```

You are given an n x n 2D matrix representing an image, rotate the image by 90 degrees (clockwise).

You have to rotate the image in place, which means you have to modify the input 2D matrix directly. DO NOT allocate another 2D matrix and do the rotation.

Input: matrix = [[1,2,3],[4,5,6],[7,8,9]]

Output: [[7,4,1],[8,5,2],[9,6,3]]

```cpp
class Solution {
public:
    void rotate(vector<vector<int>>& ar) {
        int n = ar.size();
        reverse(ar.begin(), ar.end());    // reversing the matrix from top to bottom
        for (int i = 0; i < n; i++) {
            for (int j = i+1; j < n; j++) // NOTE: j starts at i+1
                swap(ar[i][j], ar[j][i]); // swapping the elements across the diagonal
        }
    }
};
```

Space Complexity:- O(1)

Time Complexity:- O(N*N)

We create a dummy node that points to the head of the linked list; it helps in corner cases, such as when there is only one node in the list. We use two pointers, fast and slow, both initially pointing at the dummy node. We first move the fast pointer ahead by n nodes, then move fast and slow together, maintaining a constant gap of n nodes between them, until fast reaches the last node. At that point slow points at the node just before the nth node from the end. We then link slow's next to the next of its next node, thus deleting the required node.

**Solution:**

```cpp
class Solution {
public:
    ListNode* removeNthFromEnd(ListNode* head, int n) {
        ListNode* dummy = new ListNode();
        dummy->next = head;
        ListNode* slow = dummy;
        ListNode* fast = dummy;
        for (int i = 0; i < n; i++)    // give fast a head start of n nodes
            fast = fast->next;
        while (fast->next != NULL) {   // advance both until fast is at the last node
            fast = fast->next;
            slow = slow->next;
        }
        slow->next = slow->next->next; // unlink the nth node from the end
        return dummy->next;
    }
};
```

**Time Complexity:** O(n)

**Space Complexity:** O(1)