Commit 0fdf01e
committed: 217: partial, quicksort intro
1 parent 71a43f1
File tree: 1 file changed
README.md (+15 −1)
@@ -195,4 +195,18 @@ def removeElement(self, nums, val):
We can't really do a better job. We are not taking any new memory, and the time complexity is just $O(n)$, as we only iterate once.
#### 217. Contains Duplicate

The brute force approach is to iterate with two for loops and compare all the numbers with each other. But we already know this is not a good solution. We also know its time complexity from problem 53.: with nested for loops it's $O(n^2)$.

```python
def containsDuplicate(self, nums):
    # Compare every pair of distinct positions: O(n^2).
    # j starts at i + 1 so an element is never compared with itself.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False
```
So what can we do to make it better? One approach is to first sort the array (e.g. with quicksort, which runs in $O(n\log n)$) and then iterate just once, comparing subsequent elements. The total time complexity would be $O(n\log n + n) = O(n\log n)$. Remember? We saw in problem 53. that only the dominant term matters in big O notation, which is why only the $n\log n$ part survives in the final time complexity.
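
The sort-then-scan idea above can be sketched like this (a minimal standalone version, written as a plain function rather than the LeetCode method style):

```python
def contains_duplicate(nums):
    # Sorting puts equal values next to each other: O(n log n).
    nums = sorted(nums)
    # A single O(n) pass then only needs to compare neighbours.
    for i in range(1, len(nums)):
        if nums[i] == nums[i - 1]:
            return True
    return False


print(contains_duplicate([1, 2, 3, 1]))  # True
print(contains_duplicate([1, 2, 3, 4]))  # False
```

Note that `sorted` returns a new list, so the caller's input is left untouched; sorting in place with `nums.sort()` would avoid the extra $O(n)$ memory at the cost of mutating the argument.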
