Last active
December 5, 2024 22:00
dideler/4688053
Removing duplicate lines from a file in Python
#!/usr/bin/python
"""
Playing around with slightly different ways to simulate uniq in Python.
The different strategies are timed.
Only m1() and m2() do not change the order of the data.
`in` is the input file, `out*` are output files.
"""

infile = 'in'  # Change filename to suit your needs.

def m1():
    s = set()
    with open('out1', 'w') as out:
        for line in open(infile):
            if line not in s:
                out.write(line)
                s.add(line)

def m2():
    s = set()
    out = open('out2', 'w')
    for line in open(infile):
        if line not in s:
            out.write(line)
            s.add(line)
    out.close()

def m3():
    s = set()
    for line in open(infile):
        s.add(line)
    out = open('out3', 'w')
    for line in s:
        out.write(line)
    out.close()

def m4():
    s = set()
    for line in open(infile):
        s.add(line)
    out = open('out4', 'w').writelines(s)

def m5():
    uniqlines = set(open(infile).readlines())
    out = open('out5', 'w').writelines(uniqlines)

if __name__ == '__main__':
    import timeit
    print 'm1', timeit.timeit('m1()', setup='from __main__ import m1', number=1000)
    print 'm2', timeit.timeit('m2()', setup='from __main__ import m2', number=1000)
    print 'm3', timeit.timeit('m3()', setup='from __main__ import m3', number=1000)
    print 'm4', timeit.timeit('m4()', setup='from __main__ import m4', number=1000)
    print 'm5', timeit.timeit('m5()', setup='from __main__ import m5', number=1000)
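As an aside not in the original gist: on Python 3.7+, dict keys preserve insertion order, so an order-preserving dedupe can be written without the manual set bookkeeping of m1()/m2(). The function name below is illustrative, not part of the gist.

```python
# Hypothetical order-preserving dedupe (assumes Python 3.7+, where dicts
# preserve insertion order): dict.fromkeys keeps the first occurrence of
# each line and discards later repeats.
def dedupe_preserving_order(lines):
    return list(dict.fromkeys(lines))

print(dedupe_preserving_order(["a\n", "b\n", "a\n", "c\n", "b\n"]))
# ['a\n', 'b\n', 'c\n']
```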
thanks
Awesome, thank you!
@jseldess m1 and m2 do not change the order ;) m0 is just a typo.
To make the script compatible with Python 3.6, modify the bottom of the script as follows:

if __name__ == '__main__':
    import timeit
    print('m1: ' + str(timeit.timeit('m1()', setup='from __main__ import m1', number=1000)))
    print('m2: ' + str(timeit.timeit('m2()', setup='from __main__ import m2', number=1000)))
    print('m3: ' + str(timeit.timeit('m3()', setup='from __main__ import m3', number=1000)))
    print('m4: ' + str(timeit.timeit('m4()', setup='from __main__ import m4', number=1000)))
    print('m5: ' + str(timeit.timeit('m5()', setup='from __main__ import m5', number=1000)))
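A further simplification that should also work on Python 3 (an assumption about the standard library, not shown in the original comment): timeit.timeit accepts a callable directly, which avoids the setup string entirely.

```python
import timeit

def noop():
    # Stand-in for one of the m1()..m5() functions above.
    pass

# Passing the function object itself means no
# 'from __main__ import ...' setup string is needed.
elapsed = timeit.timeit(noop, number=1000)
print(isinstance(elapsed, float))  # True
```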
Result:
m1: 18.488440779925337
m2: 18.840775916894255
m3: 17.286346907121
m4: 17.045031414935742
m5: 18.767793934837414
These timings are for a 75 KB input file.
I was looking for a single method and found five ways to deal with duplicate rows. Thanks a lot!
Thanks, m5 was very useful to me
uniq only removes consecutive repetitions. It doesn't keep track of all processed lines, as you do in all of your implementations (each one uses set()). Instead, uniq only remembers the last line, and removes multiple occurrences of it.
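To illustrate that point, here is a rough sketch (not from the gist) of uniq's consecutive-only behavior; it remembers just the previous line rather than a set of all seen lines. The function name is illustrative.

```python
# Sketch of uniq-like behavior: only consecutive duplicates are dropped,
# so a line that reappears later is kept again.
def uniq_consecutive(lines):
    result = []
    prev = object()  # sentinel that can never equal a real line
    for line in lines:
        if line != prev:
            result.append(line)
        prev = line
    return result

print(uniq_consecutive(["a", "a", "b", "a"]))  # ['a', 'b', 'a'] -- 'a' reappears
```

itertools.groupby gives the same behavior in one expression: [k for k, _ in groupby(lines)].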
This is great. I've been looking for the best way to do this. I ended up with the equivalent of your m1, but it's helpful to see other approaches. In your comments, you mention one other function that doesn't change order, m0. I don't see that. Can you share?