How to rebuild /etc/yum.repos.d in CentOS


If you remove all the files in /etc/yum.repos.d, you may see the error below. You can restore them as follows.

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
There are no enabled repos.
 Run "yum repolist all" to see the repos you have.
 To enable Red Hat Subscription Management repositories:
     subscription-manager repos --enable <repo>
 To enable custom repositories:
     yum-config-manager --enable <repo>
[22:09][~]# cat /etc/redhat-release
 CentOS Linux release 7.3.1611 (Core)

Check your release version and reinstall the matching centos-release package from the CentOS vault:

[22:09][~]# yum reinstall http://vault.centos.org/7.3.1611/os/x86_64/Packages/centos-release-7-3.1611.el7.centos.x86_64.rpm
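
After the reinstall, you can confirm the stock .repo files are back and that yum sees them again (a quick check, assuming a default CentOS 7 layout):

ls /etc/yum.repos.d
yum repolist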


Publishing Failed in WordPress

When you publish a post of more than about 5 MB of text in WordPress, you may see this error in red:

Publishing Failed

or you may find this in your PHP error log:

PHP Fatal error:  Maximum execution time of 60 seconds exceeded in /home/nginx/domains/juzhax.com/public/wp-includes/formatting.php on line 2295

This happens because formatting and processing such a large post takes WordPress longer than PHP's maximum execution time allows.

You can work around it as follows.

Edit the file wp-config.php and add this line:


@ini_set('max_execution_time', 1800);

But use this with caution: if many of your WordPress scripts run longer than the old limit, such a generous timeout can let them pile up and overload your server.
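
If you would rather raise the limit for PHP as a whole instead of only WordPress, the same directive can be set in php.ini (the path varies by setup; /etc/php.ini is common on CentOS). The value is in seconds:

max_execution_time = 300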

File ./vendor/autoload.php missing or not readable in centminmod phpMyAdmin

After installing phpMyAdmin in centminmod, I saw this error message:

File ./vendor/autoload.php missing or not readable.
Most likely you did not run Composer to install library files.

I prefer to fix it manually, so I log in to my server shell and run these commands (the phpMyAdmin directory name below is specific to my install; centminmod randomizes it):

cd /usr/local/nginx/html/1111_mysqladmin12345
git pull
rm -rf composer.phar
wget -cnv https://getcomposer.org/composer.phar -O composer.phar
php composer.phar update --no-dev
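
You can confirm the library files are back in place:

ls -l vendor/autoload.php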

To make sure Composer is refreshed every time phpMyAdmin is updated, you should modify the update script

/root/tools/phpmyadmin_update.sh

and add these lines:

git pull
rm -rf composer.phar
wget -cnv https://getcomposer.org/composer.phar -O composer.phar
php composer.phar update --no-dev

Install Tinyproxy on CentOS 7

Tinyproxy is a light-weight HTTP/HTTPS proxy daemon for POSIX operating systems.
Designed from the ground up to be fast and yet small, it is an ideal solution for use cases such as embedded deployments where a full featured HTTP proxy is required, but the system resources for a larger proxy are unavailable.

yum install -y epel-release
yum update -y
yum -y install tinyproxy
yum install vim -y

vim /etc/tinyproxy/tinyproxy.conf

Search for

Port 8888

Then search for:

Allow xxx.xxx.xxx.xxx

Set the address to the client IP that should be allowed to connect. If you want to accept connections from anywhere, just comment the line out, but I don't recommend that, because it lets anyone use your proxy.
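
After editing the config, start the service and enable it at boot (standard systemd commands on CentOS 7; the unit in the EPEL package is named tinyproxy):

systemctl enable tinyproxy
systemctl start tinyproxy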

To test from an allowed machine to the Tinyproxy server (user and tinyproxy-server below are placeholders for your own values):

ssh user@tinyproxy-server -L 1234:localhost:8888 -N

curl -I https://juzhax.com/ --proxy tinyproxy-server:8888
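
The ssh command forwards local port 1234 to the proxy's port 8888, so from that machine you can also test through the tunnel:

curl -I https://juzhax.com/ --proxy localhost:1234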

mongoexport and mongoimport with a query, from one host to another

I wanted to copy only the data I need from one server to another, and a single line in the Linux shell does it:

mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { $gte: 1 } }' | mongoimport -h toHost.com -d toNewDB -c toNewCollection
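
If your MongoDB servers require authentication, both tools accept the standard credential options, for example (user and password are placeholders):

mongoexport -h fromHost.com -u user -p 'password' --authenticationDatabase admin -d fromDB -c fromCollection -q '{ count: { $gte: 1 } }' | mongoimport -h toHost.com -u user -p 'password' --authenticationDatabase admin -d toNewDB -c toNewCollection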

If your data is many gigabytes, you can run it in the background using nohup (note the sh -c wrapper; nohup cannot run a quoted pipeline by itself):

nohup "mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { \$gte: 1 } }' | mongoimport -h toHost.com -d toNewDB -c toNewCollection" &

If you want to watch the progress:

tail -f nohup.out

It will output something like

2016-05-17T02:34:47.822+0700	imported 1431218 documents
2016-05-17T02:36:40.240+0700	connected to: localhost
2016-05-17T02:36:40.243+0700	connected to: db.fromHost.com
2016-05-17T02:36:41.244+0700	db.collection 1000
2016-05-17T02:36:42.243+0700	db.collection  56000
2016-05-17T02:36:43.239+0700	db.collection0517	11.5 MB
2016-05-17T02:36:43.243+0700	db.collection  88000
2016-05-17T02:36:44.244+0700	db.collection  128000
2016-05-17T02:36:45.243+0700	db.collection  160000
2016-05-17T02:36:46.239+0700	db.collection0517	24.4 MB
.....
.....

Fast way to find duplicate data in MongoDB

I needed to find the duplicate values among my 40 million records so that I could create a unique index on the field (field_name in the example below).

> db.collection.aggregate([
...     { $group : {_id : "$field_name", total : { $sum : 1 } } },
...     { $match : { total : { $gte : 2 } } },
...     { $sort : {total : -1} },
...     { $limit : 5 }],
... { allowDiskUse: true}    
...     );

{ "_id" : "data001", "total" : 2 }
{ "_id" : "data004231", "total" : 2 }
{ "_id" : "data00751", "total" : 2 }
{ "_id" : "data0021", "total" : 2 }
{ "_id" : "data001543", "total" : 2 }
> 

{ allowDiskUse: true } is optional if your data set is small; on a large collection the $group stage can exceed the aggregation pipeline's 100 MB memory limit without it.

Increase { $limit : 5 } if you want to see more of the duplicates.
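
Once the duplicates are cleaned up, the unique index can be created in the same mongo shell (using the same field_name as above):

> db.collection.createIndex({ field_name: 1 }, { unique: true })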

ERROR: failed to ptrace(PEEKDATA) pid 17402: Input/output error (5)

You may find this error in your php-fpm log when php-fpm crashes:

tail /var/log/php-fpm/error.log
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 17402: Input/output error (5)
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 17777: Input/output error (5)
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 18886: Input/output error (5)
[15-May-2016 12:25:53] ERROR: failed to ptrace(PEEKDATA) pid 17232: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 12091: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 16704: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 17779: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 19015: Input/output error (5)
[15-May-2016 12:30:53] ERROR: failed to ptrace(PEEKDATA) pid 20663: Input/output error (5)
[15-May-2016 12:30:53] ERROR: failed to ptrace(PEEKDATA) pid 21002: Input/output error (5)

Solution to stop ERROR: failed to ptrace(PEEKDATA)

These errors come from the slow-request logger, which attaches to worker processes with ptrace to capture a backtrace. You can stop them by commenting out the slow-log settings in the php-fpm pool config (at the cost of losing the slow log):

vim /etc/php-fpm.d/www.conf

then comment out these lines:

;slowlog = /var/log/php-fpm/slow.log
;request_slowlog_timeout = 5s
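
Restart php-fpm afterwards so the change takes effect (systemd on CentOS 7; the unit name may differ on your system):

systemctl restart php-fpm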

How to clear the nginx cache manually?

Normally I like to do this job manually.

Assuming the default path of your nginx cache is /var/cache/nginx:

find /var/cache/nginx -type f -delete

This will clear all your cache in one line.

Delete a single file from nginx cache

If you just want to delete a single file from the nginx cache, you can try this:

grep -lr '//juzhax.com/wp-content/plugins/jetpack/modules/wpgroho.js' /var/cache/nginx*

It will show something like this:
/var/cache/nginx/8/45/6025f6b505cd8cbc1172d4e541ac3458

You can then safely remove it with rm:

rm /var/cache/nginx/8/45/6025f6b505cd8cbc1172d4e541ac3458
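
This works because nginx stores the cache key, which contains the request URL, in plain text near the top of each cache file, so grep can match the URL even though the rest of the file is binary.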

How to resize an OVH Public Cloud disk in CentOS 7 Linux

I'm going to resize a 100 GB SSD to 150 GB in the Public Cloud.

First I check the currently mounted disks:

[root@server ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           780M     0  780M   0% /run/user/1000
/dev/vdb         99G   84G  9.7G  90% /mnt/data

Then I'm going to unmount it:

[root@server ~]# umount /dev/vdb
[root@server ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run

I use the OVH interface to take a backup first, then Detach the disk, resize it to 150 GB, Save, and attach it again.
And now back to the shell.

[root@server ~]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000161a3

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83883491    41940722   83  Linux

Disk /dev/vdb: 161.1 GB, 161061273600 bytes, 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
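
Note that /dev/vdb has no partition table; the filesystem sits directly on the device, which is why e2fsck and resize2fs are run on /dev/vdb itself. If the disk were partitioned, you would have to grow the partition first.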

Use e2fsck to check the file system:

[root@server ~]# e2fsck /dev/vdb
e2fsck 1.42.9 (28-Dec-2013)
/dev/vdb: clean, 62/6553600 files, 22382398/26214144 blocks

[root@server ~]# resize2fs /dev/vdb
resize2fs 1.42.9 (28-Dec-2013)
Please run 'e2fsck -f /dev/vdb' first.

[root@server ~]# e2fsck -f /dev/vdb
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vdb: 62/6553600 files (56.5% non-contiguous), 22382398/26214144 blocks

Resize using resize2fs

[root@server ~]# resize2fs /dev/vdb
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vdb to 39321600 (4k) blocks.
The filesystem on /dev/vdb is now 39321600 blocks long.

Mount it back to our system:

[root@server ~]# mount /dev/vdb /mnt/data
[root@server ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  2.4G   36G   7% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.8G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           780M     0  780M   0% /run/user/1000
/dev/vdb        148G   84G   57G  60% /mnt/data

You can see the size has increased.
Always back up before you make any change to a disk.
Run these commands at your own risk.

lazy-result.js ReferenceError: Promise is not defined

node_modules/postcss/lib/lazy-result.js:157
        this.processing = new Promise(function (resolve, reject) {
                              ^
ReferenceError: Promise is not defined

When I installed FoundationPress and ran gulp build, I saw this message. The cause is the Node.js version: older versions of Node do not ship a native Promise implementation.

This is mentioned in this issue:
https://github.com/postcss/postcss-nested/issues/30

Solution

vim node_modules/postcss/lib/lazy-result.js

Put this on the first line of lazy-result.js:

require('es6-promise').polyfill();

Save.

Then install the polyfill:

npm install es6-promise
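
Note that changes under node_modules are lost the next time the packages are reinstalled; upgrading Node.js to a version with a native Promise implementation is the longer-term fix.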

Then build again

[root@server]# gulp build
[18:14:07] Starting 'clean'...
[18:14:07] Starting 'clean:javascript'...
[18:14:07] Starting 'clean:css'...
[18:14:07] Finished 'clean:javascript' after 4.56 ms
[18:14:07] Finished 'clean:css' after 2.98 ms
[18:14:07] Finished 'clean' after 6.38 ms
[18:14:07] Starting 'build'...
[18:14:07] Starting 'copy'...
[18:14:07] Finished 'copy' after 103 ms
[18:14:07] Starting 'sass'...
[18:14:08] Starting 'javascript'...
[18:14:08] Starting 'lint'...
[18:14:10] Finished 'lint' after 1.53 s
[18:14:10] Finished 'sass' after 2.58 s
[18:14:14] Finished 'javascript' after 6.12 s
[18:14:14] Finished 'build' after 6.78 s

Success!