Solved it before here
I convert nums to a set for O(1) lookup. Then I repeatedly check if original is in the set; if it is, I double it. If not, I stop and return the final value.
class Solution:
    def findFinalValue(self, nums: List[int], original: int) -> int:
        values = set(nums)  # O(1) membership lookups
        while original in values:
            original *= 2  # keep doubling while the current value is present
        return original

I scan the array from left to right.
- If I see a 1, it must start a two-bit character, so I skip the next index (i += 2).
- If I see a 0, it is a one-bit character, so I move one step (i += 1).
At the end, if the pointer stops exactly at the last index, then the last character is a one-bit character.
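A minimal sketch of this scan, assuming the usual isOneBitCharacter signature:

from typing import List

class Solution:
    def isOneBitCharacter(self, bits: List[int]) -> bool:
        i = 0
        # Walk the array; a 1 always starts a two-bit character, a 0 stands alone.
        while i < len(bits) - 1:
            i += 2 if bits[i] == 1 else 1
        # The last character is a lone one-bit character only if the pointer lands exactly on it.
        return i == len(bits) - 1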
I track the index of the previous 1. Each time I see a new 1, I count how many elements separate it from the previous 1; if that gap is smaller than k, I return False. If every pair of 1s satisfies the distance constraint, I return True.
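A minimal sketch, assuming the standard kLengthApart signature:

from typing import List

class Solution:
    def kLengthApart(self, nums: List[int], k: int) -> bool:
        prev = -1  # index of the previous 1, or -1 if none seen yet
        for i, x in enumerate(nums):
            if x == 1:
                # Fewer than k elements between this 1 and the previous one.
                if prev != -1 and i - prev - 1 < k:
                    return False
                prev = i
        return True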
I scan the string once and accumulate lengths of consecutive '1' runs. For each run of length L the number of substrings containing only '1' inside that run is L*(L+1)//2. I sum those values while taking modulo 10**9 + 7 to avoid overflow, and return the result.
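A sketch of this run-counting idea, assuming the usual numSub signature:

class Solution:
    def numSub(self, s: str) -> int:
        MOD = 10**9 + 7
        total = 0
        run = 0  # length of the current run of consecutive '1's
        for ch in s:
            if ch == '1':
                run += 1
            else:
                total += run * (run + 1) // 2  # substrings inside the run that just ended
                run = 0
        total += run * (run + 1) // 2  # close a trailing run
        return total % MOD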
I solve this by using the key observation that a substring with k zeros can only be dominant if k is at most √n, since the condition requires ones ≥ k² and a substring cannot have more than n ones. So I first collect the positions of all zeros, then count all-ones substrings directly (they always satisfy the condition). For each possible zero count k from 1 to √n, I slide a window of k consecutive zeros over the string. For each such window, I compute how far the substring can extend left and right using the ones around the zero segment, and I also count how many ones lie between the first and last zero in that window. This lets me determine exactly which substrings include exactly those k zeros, and how many extra ones they need to meet the condition ones ≥ k². I then count in O(1) how many left/right extensions satisfy this requirement by subtracting the f
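The window bookkeeping is the delicate part; as a reference for the dominance condition itself (a substring counts when ones ≥ zeros²), a quadratic brute force looks like the sketch below. The method name is assumed from the problem statement, and this is not the O(n√n) zero-window approach described above.

class Solution:
    def numberOfSubstrings(self, s: str) -> int:
        n = len(s)
        count = 0
        for i in range(n):
            ones = zeros = 0
            for j in range(i, n):
                if s[j] == '1':
                    ones += 1
                else:
                    zeros += 1
                # Dominant substring: at least zeros^2 ones.
                if ones >= zeros * zeros:
                    count += 1
        return count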
I use a 2D difference array (Imos) to apply each submatrix +1 in O(1) time per query, then recover the final matrix with 2D prefix sums. Concretely, I maintain an (n+1)×(n+1) diff array and for each query (r1,c1,r2,c2) I do:
- diff[r1][c1] += 1
- diff[r1][c2+1] -= 1
- diff[r2+1][c1] -= 1
- diff[r2+1][c2+1] += 1
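A sketch of the whole routine, assuming the rangeAddQueries signature with an n×n grid:

from typing import List

class Solution:
    def rangeAddQueries(self, n: int, queries: List[List[int]]) -> List[List[int]]:
        diff = [[0] * (n + 1) for _ in range(n + 1)]
        for r1, c1, r2, c2 in queries:
            # Mark the four corners so a later prefix sum spreads +1 over the rectangle.
            diff[r1][c1] += 1
            diff[r1][c2 + 1] -= 1
            diff[r2 + 1][c1] -= 1
            diff[r2 + 1][c2 + 1] += 1
        mat = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                top = mat[i - 1][j] if i > 0 else 0
                left = mat[i][j - 1] if j > 0 else 0
                diag = mat[i - 1][j - 1] if i > 0 and j > 0 else 0
                # 2D prefix sum over the difference array recovers the final cell value.
                mat[i][j] = diff[i][j] + top + left - diag
        return mat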
I first check whether any element is already 1. If so, each non-1 element can be turned into 1 with one operation by taking its gcd with an adjacent 1, so the minimum number of operations equals the number of elements that are not 1. If no 1 exists, I try to create one by taking gcds inside a subarray: for a subarray of length L, it takes L-1 operations to produce a single 1 inside it. After creating that 1, we still need n-1 operations to turn the remaining n-1 elements into 1, so if the shortest subarray whose gcd is 1 has length L, the total minimum is (L-1) + (n-1). If no subarray has gcd 1, it is impossible and we return -1.
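A sketch of this logic, assuming the minOperations signature; math.gcd handles the subarray gcds:

from math import gcd
from typing import List

class Solution:
    def minOperations(self, nums: List[int]) -> int:
        n = len(nums)
        ones = nums.count(1)
        if ones > 0:
            # Each non-1 element costs exactly one operation to convert.
            return n - ones
        # Find the shortest subarray whose gcd is 1.
        best = float('inf')
        for i in range(n):
            g = nums[i]
            for j in range(i + 1, n):
                g = gcd(g, nums[j])
                if g == 1:
                    best = min(best, j - i + 1)
                    break  # extending further only makes the subarray longer
        if best == float('inf'):
            return -1  # no subarray reaches gcd 1, so it is impossible
        # (best - 1) operations to create a 1, then n - 1 to spread it.
        return (best - 1) + (n - 1)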