mongoexport and mongoimport with query from one host to another host

I want to copy only the data I need from one server to another, using a single line of command in the Linux shell:

mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { $gte: 1 } }' | mongoimport -h toHost.com -d toNewDB -c toNewCollection

If your data is many GB, you can run it in the background using nohup:

nohup "mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { \$gte: 1 } }' | mongoimport -h toHost.com -d toNewDB -c toNewCollection" &

If you want to monitor the progress:

tail -f nohup.out

It will output something like

2016-05-17T02:34:47.822+0700	imported 1431218 documents
2016-05-17T02:36:40.240+0700	connected to: localhost
2016-05-17T02:36:40.243+0700	connected to: db.fromHost.com
2016-05-17T02:36:41.244+0700	db.collection 1000
2016-05-17T02:36:42.243+0700	db.collection  56000
2016-05-17T02:36:43.239+0700	db.collection0517	11.5 MB
2016-05-17T02:36:43.243+0700	db.collection  88000
2016-05-17T02:36:44.244+0700	db.collection  128000
2016-05-17T02:36:45.243+0700	db.collection  160000
2016-05-17T02:36:46.239+0700	db.collection0517	24.4 MB
.....
.....
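
If you prefer not to stream directly between the two servers, you can export to a file first and import it later. This is just a sketch using mongoexport's -o and mongoimport's --file options; the hosts, databases, and collections are the same placeholders as above.

# Export the matching documents to a JSON file first
mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { $gte: 1 } }' -o dump.json

# Then load that file into the destination server
mongoimport -h toHost.com -d toNewDB -c toNewCollection --file dump.json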

Fast way to find duplicate data in MongoDB

I need to find the duplicate values in my 40 million records so that I can then create a unique index on my name field.

> db.collection.aggregate([
...     { $group : {_id : "$field_name", total : { $sum : 1 } } },
...     { $match : { total : { $gte : 2 } } },
...     { $sort : {total : -1} },
...     { $limit : 5 }],
... { allowDiskUse: true}    
...     );

{ "_id" : "data001", "total" : 2 }
{ "_id" : "data004231", "total" : 2 }
{ "_id" : "data00751", "total" : 2 }
{ "_id" : "data0021", "total" : 2 }
{ "_id" : "data001543", "total" : 2 }
> 

The { allowDiskUse: true } option is optional; you only need it when the data set is large and the aggregation exceeds the in-memory limit.

You can raise the value in { $limit : 5 } to display more results.
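
Once the duplicates are cleaned up, you can create the unique index. A minimal sketch run from the Linux shell; the database name myDB, the collection name and field_name are placeholders for your own:

# Create the unique index on the deduplicated field (myDB, collection and field_name are placeholders)
mongo myDB --eval 'db.collection.createIndex({ field_name: 1 }, { unique: true })'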

ERROR: failed to ptrace(PEEKDATA) pid 17402: Input/output error (5)

You may find this error in your php-fpm error log when php-fpm crashes:

tail /var/log/php-fpm/error.log
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 17402: Input/output error (5)
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 17777: Input/output error (5)
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 18886: Input/output error (5)
[15-May-2016 12:25:53] ERROR: failed to ptrace(PEEKDATA) pid 17232: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 12091: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 16704: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 17779: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 19015: Input/output error (5)
[15-May-2016 12:30:53] ERROR: failed to ptrace(PEEKDATA) pid 20663: Input/output error (5)
[15-May-2016 12:30:53] ERROR: failed to ptrace(PEEKDATA) pid 21002: Input/output error (5)

Solution to stop ERROR: failed to ptrace(PEEKDATA)

You can simply comment out the slow-log settings in the php-fpm pool config:

vim /etc/php-fpm.d/www.conf

then comment out these two lines:

;slowlog = /var/log/php-fpm/slow.log
;request_slowlog_timeout = 5s
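
After saving the config, restart php-fpm so the change takes effect; the exact command depends on your init system:

# systemd (CentOS 7 / recent distros)
systemctl restart php-fpm

# SysV init
service php-fpm restart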

How to clear the nginx cache manually?

Normally I prefer to do this kind of job manually.

Assuming the default path of your nginx cache is /var/cache/nginx:

find /var/cache/nginx -type f -delete

This will clear all your cache in one line.
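
If you want a quick sanity check that the cache is really gone, look at the directory size afterwards (assuming the same cache path):

# Should report only a few KB of empty directories left
du -sh /var/cache/nginx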

Delete a single file from nginx cache

If you just want to delete a single file from the nginx cache, you can try this:

grep -lr '//juzhax.com/wp-content/plugins/jetpack/modules/wpgroho.js' /var/cache/nginx*

Then it will show something like this:
/var/cache/nginx/8/45/6025f6b505cd8cbc1172d4e541ac3458

You can then safely remove it using rm:

rm /var/cache/nginx/8/45/6025f6b505cd8cbc1172d4e541ac3458
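
You can also combine the lookup and the delete into one line. This is only a sketch, so double-check the grep pattern before running it:

# Find every cache entry for that URL and delete them in one go
grep -lr '//juzhax.com/wp-content/plugins/jetpack/modules/wpgroho.js' /var/cache/nginx | xargs rm -f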

How to resize an OVH Public Cloud disk in CentOS 7 Linux

I’m going to resize a 100 GB SSD to 150 GB in the Public Cloud.

First, I check the currently mounted disks:

[[email protected] ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           780M     0  780M   0% /run/user/1000
/dev/vdb         99G   84G  9.7G  90% /mnt/data

Then I’m going to unmount it:

[[email protected] ~]# umount /dev/vdb
[[email protected] ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run

I use the OVH interface to back up first, then detach the disk, resize it to 150 GB, save, and attach it again.
Now back to the shell.

[[email protected] ~]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000161a3

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83883491    41940722   83  Linux

Disk /dev/vdb: 161.1 GB, 161061273600 bytes, 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Use e2fsck to check the filesystem:

[[email protected] ~]# e2fsck /dev/vdb
e2fsck 1.42.9 (28-Dec-2013)
/dev/vdb: clean, 62/6553600 files, 22382398/26214144 blocks

[[email protected] ~]# resize2fs /dev/vdb
resize2fs 1.42.9 (28-Dec-2013)
Please run 'e2fsck -f /dev/vdb' first.

[[email protected] ~]# e2fsck -f /dev/vdb
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vdb: 62/6553600 files (56.5% non-contiguous), 22382398/26214144 blocks

Resize using resize2fs

[[email protected] ~]# resize2fs /dev/vdb
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vdb to 39321600 (4k) blocks.
The filesystem on /dev/vdb is now 39321600 blocks long.
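
Note that this works because the filesystem sits directly on /dev/vdb with no partition table. If your disk were partitioned (e.g. /dev/vdb1), you would have to grow the partition first; a sketch, assuming the cloud-utils-growpart package is installed:

# Hypothetical partitioned layout: grow partition 1 of /dev/vdb, then the filesystem on it
growpart /dev/vdb 1
resize2fs /dev/vdb1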

Mount it back to the system:

[[email protected] ~]# mount /dev/vdb /mnt/data
[[email protected] ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           780M     0  780M   0% /run/user/1000
/dev/vdb        148G   84G   57G  60% /mnt/data

You can see the size has increased.
Always back up before you make any change to a disk.
Use these Linux commands at your own risk.

lazy-result.js ReferenceError: Promise is not defined

node_modules/postcss/lib/lazy-result.js:157
        this.processing = new Promise(function (resolve, reject) {
                              ^
ReferenceError: Promise is not defined

After I installed FoundationPress and ran gulp build, I saw this message.
It is caused by the Node.js version: older versions do not provide a global Promise.

It has been mentioned in this post:
https://github.com/postcss/postcss-nested/issues/30

Solution

vim node_modules/postcss/lib/lazy-result.js

Put this at the first line of the file lazy-result.js:

require('es6-promise').polyfill();

Save.

Then install

npm install es6-promise
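
Alternatively, instead of patching a file inside node_modules, you can upgrade to a Node.js version that ships a native Promise (4.x or later). A sketch, assuming you manage Node with nvm:

# Switch to a Node release with a built-in global Promise
nvm install 4
nvm use 4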

Then build again

[[email protected]]# gulp build
[18:14:07] Starting 'clean'...
[18:14:07] Starting 'clean:javascript'...
[18:14:07] Starting 'clean:css'...
[18:14:07] Finished 'clean:javascript' after 4.56 ms
[18:14:07] Finished 'clean:css' after 2.98 ms
[18:14:07] Finished 'clean' after 6.38 ms
[18:14:07] Starting 'build'...
[18:14:07] Starting 'copy'...
[18:14:07] Finished 'copy' after 103 ms
[18:14:07] Starting 'sass'...
[18:14:08] Starting 'javascript'...
[18:14:08] Starting 'lint'...
[18:14:10] Finished 'lint' after 1.53 s
[18:14:10] Finished 'sass' after 2.58 s
[18:14:14] Finished 'javascript' after 6.12 s
[18:14:14] Finished 'build' after 6.78 s

Success !

How to Install MongoDB 3.2 on CentOS 7

vim /etc/yum.repos.d/mongodb.repo

Paste this into the file and save using :wq

[MongoDB]
name=MongoDB Repository
baseurl=http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=0
enabled=1

Download and install MongoDB using yum:

yum install mongodb-org -y

Start mongod and configure it to start automatically at system boot:

/etc/init.d/mongod restart
chkconfig mongod on
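
On CentOS 7 you can also manage mongod through systemd instead of the SysV wrappers:

# systemd equivalents
systemctl start mongod
systemctl enable mongod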

Check all the versions

[[email protected] ~]# mongo --version
MongoDB shell version: 3.2.3
[[email protected] ~]# mongod --version
db version v3.2.3
git version: b326ba837cf6f49d65c2f85e1b70f6f31ece7937
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
    distmod: rhel70
    distarch: x86_64
    target_arch: x86_64

Test the connection

[[email protected] ~]# mongo
MongoDB shell version: 3.2.3
> use test
switched to db test
> db.test.save( { juzhax: 1 } )
WriteResult({ "nInserted" : 1 })
> db.test.find()
{ "_id" : ObjectId("56d4ac48b376b143e4749229"), "juzhax" : 1 }

WARNING: /sys/kernel/mm/transparent_hugepage/enabled is ‘always’.

After installing MongoDB 3.2.3 on CentOS 7, I received this warning when I started mongo in the shell:

[[email protected] ~]# mongo
MongoDB shell version: 3.2.3
connecting to: test
Server has startup warnings:
2016-02-29T14:11:49.308-0500 I CONTROL  [initandlisten]
2016-02-29T14:11:49.308-0500 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-02-29T14:11:49.308-0500 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-02-29T14:11:49.308-0500 I CONTROL  [initandlisten]
2016-02-29T14:11:49.308-0500 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-02-29T14:11:49.308-0500 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-02-29T14:11:49.308-0500 I CONTROL  [initandlisten]
2016-02-29T14:11:49.308-0500 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.

Solution

Create the init.d script.
Create the following file at /etc/init.d/disable-transparent-hugepages:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          disable-transparent-hugepages
# Required-Start:    $local_fs
# Required-Stop:
# X-Start-Before:    mongod mongodb-mms-automation-agent
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Disable Linux transparent huge pages
# Description:       Disable Linux transparent huge pages, to improve
#                    database performance.
### END INIT INFO

case $1 in
  start)
    if [ -d /sys/kernel/mm/transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/transparent_hugepage
    elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/redhat_transparent_hugepage
    else
      return 0
    fi

    echo 'never' > ${thp_path}/enabled
    echo 'never' > ${thp_path}/defrag

    unset thp_path
    ;;
esac

Make it executable.
Run the following command to ensure that the init script can be used:

sudo chmod 755 /etc/init.d/disable-transparent-hugepages
sudo chkconfig --add disable-transparent-hugepages
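
You can apply it immediately and verify that transparent huge pages are now disabled (the bracketed value should be never):

# Run the script now and check the current THP setting
sudo /etc/init.d/disable-transparent-hugepages start
cat /sys/kernel/mm/transparent_hugepage/enabled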

WARNING: Cannot detect if NUMA interleaving is enabled. Failed to probe “/sys/devices/system/node/node1”: Permission denied

[[email protected] ~]# mongo
MongoDB shell version: 3.2.3
connecting to: test
Server has startup warnings:
2016-02-29T23:11:36.666+0700 I CONTROL  [initandlisten]
2016-02-29T23:11:36.667+0700 I CONTROL  [initandlisten] ** WARNING: Cannot detect if NUMA interleaving is enabled. Failed to probe "/sys/devices/system/node/node1": Permission denied
2016-02-29T23:11:36.667+0700 W CONTROL  [initandlisten]
2016-02-29T23:11:36.667+0700 W CONTROL  [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
2016-02-29T23:11:36.667+0700 W CONTROL  [initandlisten]
2016-02-29T23:11:36.667+0700 W CONTROL  [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
2016-02-29T23:11:36.667+0700 I CONTROL  [initandlisten]
2016-02-29T23:11:36.667+0700 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 262144 files. Number of processes should be at least 131072 : 0.5 times number of files.

Solution

I was using the OVH kernel, which does not work well with MongoDB here. To solve this issue I had to install the distribution’s original kernel; after that, the warning was gone.

WARNING: soft rlimits too low. rlimits set to 4096 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.

I received this warning while starting mongo in the shell, on this installation:

[[email protected] ~]# mongod --version
db version v3.2.3
git version: b326ba837cf6f49d65c2f85e1b70f6f31ece7937
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
    distmod: rhel70
    distarch: x86_64
    target_arch: x86_64
[[email protected] ~]# mongo --version
MongoDB shell version: 3.2.3
CentOS Linux release 7.2.1511 (Core)
WARNING: soft rlimits too low. rlimits set to 4096 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.

Solution

vim /etc/security/limits.d/90-nproc.conf

Then add this line:

mongod     soft    nproc     64000

and

reboot
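
After the reboot you can verify that the new limit is applied to the running mongod process; a quick check, assuming mongod runs under the mongod user:

# Show the max-processes limit of the running mongod
cat /proc/$(pgrep -x mongod | head -1)/limits | grep -i 'max processes'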