How to remove node_modules in git

You may have accidentally committed node_modules, or forgotten to add it to your .gitignore. No worries: you can still remove it from the repository even after pushing.

First, add node_modules to your .gitignore:

vim .gitignore

After adding /node_modules, save the file, then run:

git rm -r --cached .
git add .
git commit -m "Remove ignored files"
git push origin master
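
If you only need to untrack node_modules itself, rather than re-staging the whole tree, a narrower variant does the same job (a sketch, assuming node_modules sits at the repo root):

[code lang="shell"]
git rm -r --cached node_modules
git commit -m "Stop tracking node_modules"
git push origin master
[/code]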

How to rebuild /etc/yum.repos.d in CentOS


If you remove all the files in /etc/yum.repos.d, you will see the error below. You can restore them as follows.

[code lang="shell"]
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
There are no enabled repos.
 Run "yum repolist all" to see the repos you have.
 To enable Red Hat Subscription Management repositories:
     subscription-manager repos --enable <repo>
 To enable custom repositories:
     yum-config-manager --enable <repo>
[/code]
[code lang="shell"]
[22:09][~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[/code]

Check your release version above, then reinstall the matching centos-release package from the CentOS vault:

[code lang="shell"]
[22:09][~]# yum reinstall http://vault.centos.org/7.3.1611/os/x86_64/Packages/centos-release-7-3.1611.el7.centos.x86_64.rpm
[/code]
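
After the reinstall, you can rebuild the yum metadata and confirm the repos are back:

[code lang="shell"]
yum clean all
yum repolist
[/code]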


Publishing Failed in WordPress

When you publish a large post in WordPress (say over 5MB of text), you may see this error in red:

Publishing Failed

or you may find this in your PHP error log:

[code lang="shell"]
PHP Fatal error:  Maximum execution time of 60 seconds exceeded in /home/nginx/domains/juzhax.com/public/wp-includes/formatting.php on line 2295
[/code]

This happens because formatting and processing such a large post takes WordPress longer than PHP's maximum execution time.

You can work around it like this.

Edit wp-config.php and add this line:


[code lang="php"]
@ini_set('max_execution_time', 1800);
[/code]

Use this with caution: if many scripts in your WordPress install run this long, raising the limit may overwhelm your server.
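
If you would rather raise the limit server-wide instead of per-site, the same directive lives in php.ini (the file location varies by setup, so treat this as a sketch):

[code lang="shell"]
; php.ini — allow scripts to run for up to 30 minutes
max_execution_time = 1800
[/code]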

"File ./vendor/autoload.php missing or not readable" in centminmod phpMyAdmin

After installing phpMyAdmin in centminmod, I saw this error message.

[code lang="shell"]
File ./vendor/autoload.php missing or not readable.
Most likely you did not run Composer to install library files.
[/code]

I like to fix it manually, so I opened a shell on my server and ran these commands.

[code lang="shell"]
# path to your phpMyAdmin install directory (yours will differ)
cd /usr/local/nginx/html/1111_mysqladmin12345
# pull the latest phpMyAdmin code
git pull
# fetch a fresh copy of Composer
rm -rf composer.phar
wget -cnv https://getcomposer.org/composer.phar -O composer.phar
# install PHP dependencies, skipping dev packages
php composer.phar update --no-dev
[/code]
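
You can verify the fix by confirming that the missing file now exists:

[code lang="shell"]
ls -l vendor/autoload.php
[/code]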

To make sure the Composer dependencies are refreshed every time phpMyAdmin is updated, modify the file

/root/tools/phpmyadmin_update.sh

and add these lines:

[code lang="shell"]
git pull
rm -rf composer.phar
wget -cnv https://getcomposer.org/composer.phar -O composer.phar
php composer.phar update --no-dev
[/code]

Install Tinyproxy on Centos 7

Tinyproxy is a light-weight HTTP/HTTPS proxy daemon for POSIX operating systems.
Designed from the ground up to be fast and yet small, it is an ideal solution for use cases such as embedded deployments where a full featured HTTP proxy is required, but the system resources for a larger proxy are unavailable.

[code lang="shell"]
yum install -y epel-release
yum update -y
yum -y install tinyproxy
yum install vim -y
[/code]

Edit the configuration file:

[code lang="shell"]
vim /etc/tinyproxy/tinyproxy.conf
[/code]

Search for the listening port and change it if you need a different one:

[code lang="shell"]
Port 8888
[/code]

Then search for the Allow directive and set it to the IP address that is allowed to connect:

[code lang="shell"]
Allow xxx.xxx.xxx.xxx
[/code]

If you want to let clients connect from anywhere, you can comment the Allow line out, but I don't recommend it, because it allows anyone to use your proxy.
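
After editing the config, restart Tinyproxy so the changes take effect (the service name here is assumed from the EPEL package):

[code lang="shell"]
systemctl restart tinyproxy
systemctl enable tinyproxy
[/code]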

To test the proxy from an allowed machine, first open an SSH tunnel to the Tinyproxy server (the user and host names below are placeholders):

[code lang="shell"]
ssh user@tinyproxy-server -L 1234:localhost:8888 -N
[/code]

Then point curl at the local end of the tunnel:

[code lang="shell"]
curl -I https://juzhax.com/ --proxy localhost:1234
[/code]

mongoexport and mongoimport with a query, from one host to another

I wanted to copy only the documents matching a query from one server to another, with a single line in the Linux shell:

[code lang="shell"]
mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { $gte: 1 } }' | mongoimport -h toHost.com -d toNewDB -c toNewCollection
[/code]

If your data is many GB, you can run it in the background using nohup. Note that nohup runs a single command, so the pipeline has to be wrapped in a shell:
[code lang="shell"]
nohup sh -c "mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { \$gte: 1 } }' | mongoimport -h toHost.com -d toNewDB -c toNewCollection" &
[/code]

To watch the progress:
[code lang="shell"]
tail -f nohup.out
[/code]
It will output something like:
[code lang="shell"]
2016-05-17T02:34:47.822+0700 imported 1431218 documents
2016-05-17T02:36:40.240+0700 connected to: localhost
2016-05-17T02:36:40.243+0700 connected to: db.fromHost.com
2016-05-17T02:36:41.244+0700 db.collection 1000
2016-05-17T02:36:42.243+0700 db.collection 56000
2016-05-17T02:36:43.239+0700 db.collection0517 11.5 MB
2016-05-17T02:36:43.243+0700 db.collection 88000
2016-05-17T02:36:44.244+0700 db.collection 128000
2016-05-17T02:36:45.243+0700 db.collection 160000
2016-05-17T02:36:46.239+0700 db.collection0517 24.4 MB
…..
…..
[/code]
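
To confirm the copy is complete, you can compare document counts on both sides (a sketch using the legacy mongo shell):

[code lang="shell"]
mongo fromHost.com/fromDB --eval 'db.fromCollection.count({ count: { $gte: 1 } })'
mongo toHost.com/toNewDB --eval 'db.toNewCollection.count()'
[/code]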

Fast way to find duplicate data in MongoDB

I needed to find the duplicate values in my 40 million records so that I could then create a unique index on my name field.

[code lang="shell"]
> db.collection.aggregate([
… { $group : {_id : "$field_name", total : { $sum : 1 } } },
… { $match : { total : { $gte : 2 } } },
… { $sort : {total : -1} },
… { $limit : 5 }],
… { allowDiskUse: true}
… );

{ "_id" : "data001", "total" : 2 }
{ "_id" : "data004231", "total" : 2 }
{ "_id" : "data00751", "total" : 2 }
{ "_id" : "data0021", "total" : 2 }
{ "_id" : "data001543", "total" : 2 }
>
[/code]

{ allowDiskUse: true } is optional; you only need it when the aggregation is too large to fit in memory.

{ $limit : 5 } caps the output at five duplicates; raise it to display more.
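
Once the duplicates are removed, the unique index I mentioned can be created on the same field (a sketch; field_name matches the $group key above):

[code lang="shell"]
> db.collection.createIndex({ field_name: 1 }, { unique: true })
[/code]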

ERROR: failed to ptrace(PEEKDATA) pid 17402: Input/output error (5)

You may find these errors in your php-fpm error log when php-fpm workers crash:

[code lang="shell"]
tail /var/log/php-fpm/error.log
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 17402: Input/output error (5)
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 17777: Input/output error (5)
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 18886: Input/output error (5)
[15-May-2016 12:25:53] ERROR: failed to ptrace(PEEKDATA) pid 17232: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 12091: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 16704: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 17779: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 19015: Input/output error (5)
[15-May-2016 12:30:53] ERROR: failed to ptrace(PEEKDATA) pid 20663: Input/output error (5)
[15-May-2016 12:30:53] ERROR: failed to ptrace(PEEKDATA) pid 21002: Input/output error (5)
[/code]

Solution to stop ERROR: failed to ptrace(PEEKDATA)

These errors come from php-fpm's slow-request tracer, which uses ptrace to take backtraces of workers. You can stop them by commenting out the slowlog settings in the pool config:
[code lang="shell"]
vim /etc/php-fpm.d/www.conf
[/code]

then comment out these two lines:
[code lang="shell"]
;slowlog = /var/log/php-fpm/slow.log
;request_slowlog_timeout = 5s
[/code]
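
Then restart php-fpm so the change takes effect (the systemd unit name on CentOS 7 is assumed here):

[code lang="shell"]
systemctl restart php-fpm
[/code]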

How to clear the nginx cache manually?

Normally I like to do this kind of job manually.

Assuming the default path of your nginx cache is /var/cache/nginx:

[code lang="shell"]
find /var/cache/nginx -type f -delete
[/code]

This clears your entire cache in one line.

Delete a single file from nginx cache

If you just want to delete a single file from the nginx cache, you can try this.

[code lang="shell"]
grep -lr '//juzhax.com/wp-content/plugins/jetpack/modules/wpgroho.js' /var/cache/nginx*
[/code]
Then it will show something like this:

[code lang="shell"]
/var/cache/nginx/8/45/6025f6b505cd8cbc1172d4e541ac3458
[/code]

You can then safely remove it using rm:
[code lang="shell"]
rm /var/cache/nginx/8/45/6025f6b505cd8cbc1172d4e541ac3458
[/code]
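
grep works here because nginx stores the cache key in plain text inside each cache file. As a side note, the file name itself is the MD5 hash of the cache key, so if you know the exact key you can compute the name directly. A minimal sketch, assuming a key format of $scheme://$host$request_uri:

[code lang="shell"]
echo -n "https://juzhax.com/wp-content/plugins/jetpack/modules/wpgroho.js" | md5sum
[/code]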

How to resize an OVH Public Cloud disk in CentOS 7 Linux

I'm going to resize a 100GB SSD to 150GB in the Public Cloud.

First, I check the currently mounted disks:
[code lang="shell"]
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           780M     0  780M   0% /run/user/1000
/dev/vdb         99G   84G  9.7G  90% /mnt/data
[/code]

Then I unmount it:
[code lang="shell"]
# umount /dev/vdb
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run
[/code]

In the OVH control panel I take a backup first, then detach the disk, resize it to 150GB, save, and re-attach it to the instance.
Now back to the shell.

[code lang="shell"]
# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000161a3

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83883491    41940722   83  Linux

Disk /dev/vdb: 161.1 GB, 161061273600 bytes, 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[/code]
/dev/vdb now reports 161.1 GB. Use e2fsck to check the filesystem before resizing:

[code lang="shell"]
# e2fsck /dev/vdb
e2fsck 1.42.9 (28-Dec-2013)
/dev/vdb: clean, 62/6553600 files, 22382398/26214144 blocks

# resize2fs /dev/vdb
resize2fs 1.42.9 (28-Dec-2013)
Please run 'e2fsck -f /dev/vdb' first.

# e2fsck -f /dev/vdb
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vdb: 62/6553600 files (56.5% non-contiguous), 22382398/26214144 blocks
[/code]

Resize using resize2fs. Without a size argument, resize2fs grows the filesystem to fill the whole device:

[code lang="shell"]
# resize2fs /dev/vdb
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vdb to 39321600 (4k) blocks.
The filesystem on /dev/vdb is now 39321600 blocks long.
[/code]
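
Note that /dev/vdb here holds the filesystem directly, with no partition table. If your data disk is partitioned (e.g. /dev/vdb1), you would need to grow the partition first; a sketch using growpart from cloud-utils (an assumption about your setup):

[code lang="shell"]
# grow partition 1 of /dev/vdb, then check and grow the filesystem on it
growpart /dev/vdb 1
e2fsck -f /dev/vdb1
resize2fs /dev/vdb1
[/code]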

Mount it back to the system:

[code lang="shell"]
# mount /dev/vdb /mnt/data
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           780M     0  780M   0% /run/user/1000
/dev/vdb        148G   84G   57G  60% /mnt/data
[/code]

You can see the size has increased.
Always back up before making any changes to your disks.
Run these Linux commands at your own risk.