Fast way to find duplicate data in MongoDB

I need to find the duplicate values in my 40 million records so that I can then create a unique index on my name field.

[code lang="shell"]
> db.collection.aggregate([
... { $group : { _id : "$field_name", total : { $sum : 1 } } },
... { $match : { total : { $gte : 2 } } },
... { $sort : { total : -1 } },
... { $limit : 5 } ],
... { allowDiskUse : true }
... );

{ "_id" : "data001", "total" : 2 }
{ "_id" : "data004231", "total" : 2 }
{ "_id" : "data00751", "total" : 2 }
{ "_id" : "data0021", "total" : 2 }
{ "_id" : "data001543", "total" : 2 }
>
[/code]
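
If you also want to see which documents share each duplicated value, the same grouping can collect their _id values with $push. A minimal sketch, using the same placeholder collection and field names as above:

[code lang="shell"]
> // Group on the same field, but also collect the _id of every matching document.
> db.collection.aggregate([
... { $group : { _id : "$field_name", dups : { $push : "$_id" }, total : { $sum : 1 } } },
... { $match : { total : { $gte : 2 } } }
... ],
... { allowDiskUse : true }
... );
[/code]

The dups array then tells you exactly which documents to keep or remove before building the index.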

{ allowDiskUse : true } is optional if your data set is small, but on a large collection it lets the aggregation stages write to temporary files instead of failing when they exceed the in-memory limit.

Raise { $limit : 5 } in the first pipeline if you want more than five duplicated values listed.
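
Once the duplicates have been resolved, the unique index from the original goal can be created. A minimal sketch, again assuming the placeholder collection and field names used above:

[code lang="shell"]
> // This will fail if any duplicate values are still present in the collection.
> db.collection.createIndex(
...     { field_name : 1 },
...     { unique : true }
... );
[/code]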
