When you want to use both of them you have to decide which one the clients will connect to. In my case packets go to DansGuardian, then to Squid and finally to the internet.
In the standard configuration only DansGuardian knows the clients' IPs.
To provide the users' IPs to Squid, change the DansGuardian configuration to forward them - in dansguardian.conf (or dansguardianf1.conf, or whatever you have):
forwardedfor = on
usexforwardedfor = on
Be careful - if you don't use Squid (or some other secure proxy) behind it, you can expose your private IP addresses to the whole world.
Then change the Squid configuration (squid.conf) to allow it to find the original source:
follow_x_forwarded_for allow localhost
Of course, restart both services afterwards.
It's pretty simple, but I forgot about this.
On CentOS 7 you can log all commands to syslog and then to a local file or even to a remote server.
Send all commands to syslog
Create file /etc/sysconfig/bash-prompt-xterm:
RETRN_VAL=$?;logger -p local6.debug "$(whoami) [$$]: $(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//" ) [$RETRN_VAL]"
and make it executable:
chmod a+x /etc/sysconfig/bash-prompt-xterm
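The sed expression in the logger line above strips the leading history number before the command reaches syslog; you can check what it does in isolation:

```shell
# "history 1" prints a line like "  123  ls -la /tmp";
# the sed strips the leading index and whitespace, leaving only the command
echo "  123  ls -la /tmp" | sed "s/^[ ]*[0-9]\+[ ]*//"
# -> ls -la /tmp
```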
Configure syslog to send messages from local6 facility to separate file
Create file /etc/rsyslog.d/bash.conf:
local6.* /var/log/commands.log
finally:
service rsyslog restart
Now you can monitor commands:
tail -f /var/log/commands.log
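Note that the default catch-all rule in /etc/rsyslog.conf will also write local6 messages to /var/log/messages. If you want them only in commands.log, extend the exclusion list there (a sketch based on the stock CentOS 7 rule; adjust it to your own config):

```
*.info;mail.none;authpriv.none;cron.none;local6.none    /var/log/messages
```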
Log commands using audit
Alternatively you can use audit - create /etc/audit/rules.d/bash_history.rules:
-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve
but logs are not very human friendly:
grep EXECVE /var/log/audit/audit.log
and you may also want to log the execveat syscall as well (execvp, execl etc. are libc wrappers that end up calling execve, so the rules above already catch them).
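audit hex-encodes any EXECVE argument that contains spaces or special characters; `ausearch -i` will decode whole records for you, but you can also decode a single hex value by hand (assuming xxd is installed):

```shell
# Decode one hex-encoded EXECVE argument from audit.log by hand
echo "2F7573722F62696E2F6C73" | xxd -r -p
# -> /usr/bin/ls
```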
Sources:
https://askubuntu.com/questions/93566/how-to-log-all-bash-commands-by-all-users-on-a-server
https://unix.stackexchange.com/questions/86000/how-can-you-log-every-command-typed
http://whmcr.com/2011/10/14/auditd-logging-all-commands/
I've upgraded Pi-hole to version 4.0:
docker run -d --name pihole-4.0.0-1 \
  --dns 127.0.0.1 --dns 1.1.1.1 \
  -e ServerIP=192.168.1.28 -e DNS1=192.168.1.4 \
  -e WEBPASSWORD=secret \
  -p 8018:80 -p 53:53 -p 53:53/udp \
  --restart=unless-stopped \
  pihole/pihole:4.0.0-1
as you can see I use the new image name (pihole/pihole), set the restart policy and an additional --dns option - it's necessary.
Last time I had to manually edit /etc/resolv.conf inside the container - this is a simpler workaround.
To back up a running QEMU/KVM machine I use a script made by Daniel Berteaud. I've installed it this way:
cd /usr/local/sbin/
wget "http://gitweb.firewall-services.com/?p=virt-backup;a=blob_plain;f=virt-backup;hb=HEAD" -O virt-backup-0.2.17-1.pl
ln -s virt-backup-0.2.17-1.pl virt-backup
chmod u+x virt-backup-0.2.17-1.pl
as you can see, the current version is 0.2.17-1.
There is also a fork of virt-backup: github.com/vazhnov/virt-backup.pl - it's based on some earlier version.
Example command to back up the centos-test machine (name shown by virsh list) to the /mnt/backups directory:
virt-backup --action=dump --no-snapshot --compress --shutdown --shutdown-timeout=300 --vm=centos-test --backupdir=/mnt/backups
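To run such a backup periodically you can drop a line into root's crontab; a sketch (the schedule is my assumption, adjust to taste):

```
# Weekly dump of centos-test, Sundays at 02:00 (add with crontab -e as root)
0 2 * * 0 /usr/local/sbin/virt-backup --action=dump --no-snapshot --compress --shutdown --shutdown-timeout=300 --vm=centos-test --backupdir=/mnt/backups
```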
If you use ReadyNAS OS 6 you can install Pi-hole in Docker container:
docker run --name my-new-pi-hole \
  -e ServerIP=192.168.1.28 \
  -e DNS1=192.168.1.4 \
  -e WEBPASSWORD=secret \
  -d \
  -p 8080:80 -p 53:53 -p 53:53/udp \
  diginc/pi-hole
you don't have to run
docker pull diginc/pi-hole
first - it will be done automatically (or rather automagically).
As you can see I've set several parameters:
- run is the Docker command to run a container
- --name my-new-pi-hole is the name of the container (optional)
- -e sets an Environment Variable - this is how you pass options into the container
- ServerIP - the IP that Pi-hole should use (mandatory)
- DNS1 - the IP of your DNS server (optional) - if you do not provide it, Google's DNS will be used
- WEBPASSWORD (useful) - you will need it to log in to the administration portal
- -d - tells Docker that this container should work in the background
- -p - is a port redirection from your docker machine to a port in the container
- 8080:80 (useful) - redirects port 8080 of your docker machine to port 80 in the container
- 53:53 (optional) - redirects port 53 of your docker machine to port 53 in the container (not necessary in my case)
- 53:53/udp (very useful) - the same for the UDP port - this is what we need for DNS to work
- diginc/pi-hole - is of course the name of the image to run
Now you can use command
docker ps
to get some information about your container
CONTAINER ID   IMAGE            COMMAND      CREATED         STATUS                            PORTS                                                          NAMES
891aeb2c127b   diginc/pi-hole   "/s6-init"   7 seconds ago   Up 2 seconds (health: starting)   0.0.0.0:53->53/tcp, 0.0.0.0:53->53/udp, 0.0.0.0:8080->80/tcp   my-new-pi-hole
as you can see the status is health: starting; after some time it will change to healthy.
Now you can log in to http://192.168.1.28:8080/admin/ (192.168.1.28 is the address of my docker machine) to check that no one has used your server yet (Total queries 0).
Let's check if it works:
root@mynas:~# host jaqb.gda.pl 192.168.1.28
Using domain server:
Name: 192.168.1.28
Address: 192.168.1.28#53
Aliases:

jaqb.gda.pl has address 185.204.216.106
jaqb.gda.pl mail is handled by 0 mail.jaqb.gda.pl.
Yes!
Now it's enough to change default DNS in your LAN to your docker machine (in my case 192.168.1.28).
If your proxy server (Squid and/or DansGuardian) is not on your gateway, you can still set it up to be transparent.
In my examples the gateway has address 192.168.1.1, and the Squid server has address 192.168.1.28 and listens on port 8080.
This is the configuration using iptables directly:
iptables -t nat -I PREROUTING -i eth0 ! -s 192.168.1.28 -p tcp --dport 80 -j DNAT --to 192.168.1.28:8080
iptables -t nat -I POSTROUTING -o eth0 -s 192.168.1.0/24 -d 192.168.1.28 -j SNAT --to 192.168.1.1
iptables -I FORWARD -s 192.168.1.0/24 -d 192.168.1.28 -i eth0 -o eth0 -p tcp --dport 8080 -j ACCEPT
and this is the same in Gargoyle /etc/config/firewall file (you can edit it or use uci add firewall commands):
config redirect
	option name 'P12 to Squid DNAT'
	option src 'lan'
	option proto 'tcp'
	option dest_port '8080'
	option src_dport '80'
	option src_dip '! 192.168.1.1'
	option dest_ip '192.168.1.28'
	option src_ip '! 192.168.1.28'

config redirect
	option name 'P12 to Squid SNAT'
	option dest 'lan'
	option proto 'tcp'
	option src_dip '192.168.1.1'
	option dest_ip '192.168.1.28'
	option src_ip '192.168.1.0/24'
	option target 'SNAT'

config rule
	option name 'P12 to Squid'
	option dest 'lan'
	option dest_port '8080'
	option proto 'tcp'
	option src_ip '192.168.1.0/24'
	option dest_ip '192.168.1.28'
	option target 'ACCEPT'
after editing /etc/config/firewall file you have to restart firewall:
/etc/init.d/firewall restart
I tried to monitor my flat using an old phone as a web camera (it should be replaced with an IP camera some day). It's quite simple using one of many Android apps (IP Webcam by Pavel Khlebovich in my case).
Since I do not have a public IP, I had to use OpenVPN to reach my home network:
My phone/camera has address 192.168.1.11 (home LAN network). To have access to it from the Internet, I added this to the cameras.conf file (in the /etc/httpd/conf.d directory):
<Location "/cam1">
    ProxyPass "http://192.168.1.11:8080/"
    ProxyPassReverse "http://192.168.1.11:8080/"
</Location>
Now I can see what is happening by typing http://cameras.jaqb.gda.pl/cam1 in a web browser.
What is worse, anyone else can see it too. To protect my own privacy I've added authentication, so the whole file looks like this:
<VirtualHost *:80>
    <Location "/cam1">
        ProxyPass "http://192.168.1.11:8080/"
        ProxyPassReverse "http://192.168.1.11:8080/"
        AuthType Basic
        AuthName "Access to my first camera"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://localhost/dc=jaqb,dc=gda,dc=pl" NONE
        AuthLDAPBindDN "cn=administrator,dc=jaqb,dc=gda,dc=pl"
        AuthLDAPBindPassword typeyourpasswordhere
        AuthLDAPGroupAttribute memberUid
        AuthLDAPGroupAttributeIsDN off
        Require ldap-user jakub
    </Location>
</VirtualHost>
Now only user jakub (me ;-) can have access.
For identifying admin-permission issues in desktop applications, i.e. when everybody tells you to disable User Account Control (UAC), you can use LUA Buglight.
If you want to create certificates for OpenVPN you have to set two additional options in the usr_cert section of the /etc/pki/tls/openssl.cnf file:
nsCertType = server | client
keyUsage = digitalSignature
of course nsCertType has to be server when you generate a certificate for the OpenVPN server, and client for client certificates.
These options are required when you use the OpenVPN options ns-cert-type and remote-cert-tls.
Edit the /etc/pki/tls/openssl.cnf file and change the CA_default section (the *_default fields live in the req_distinguished_name section); of course you have to adapt it, i.e. change the names to yours ;-):
dir             = /etc/pki/newCA
certificate     = $dir/newCAcert.pem
private_key     = $dir/private/newCAkey.pem
countryName_default = PL
stateOrProvinceName_default = pomorskie
localityName_default = Gdynia
0.organizationName_default = Jakub Walczak
Because I've changed the default directory, I have to create some files and directories:
mkdir /etc/pki/newCA
cd /etc/pki/newCA
mkdir {certs,crl,newcerts,private}
touch index.txt
echo 00 > serial
Generate self-signed CA certificate for 5 years (5*365=1825 days):
openssl req -x509 -newkey rsa:2048 -keyout private/newCAkey.pem \
  -out newCAcert.pem -days 1825
It's very important to remember the password! You will need it every time you generate a certificate for the next 5 years.
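The req commands here prompt for the subject fields interactively; for scripting you can pass them with -subj instead. A throwaway self-signed example (the key file names and the CN are illustrative, and -nodes skips the key password, so don't do this for a real CA):

```shell
# Generate a throwaway key and self-signed certificate non-interactively
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/test.key -out /tmp/test.pem \
  -days 1 -subj "/CN=test.example"

# Show the subject we just set
openssl x509 -in /tmp/test.pem -noout -subject
```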
Generate first certificate:
openssl req -newkey rsa:2048 -nodes -keyout private/vpn.key \
  -out vpn.csr
In most cases you have to provide only the Common Name. Now you have a new vpn.csr (certificate signing request) file.
Sign it:
openssl ca -in vpn.csr -out certs/vpn.pem
Of course you have to enter CA password.
You have to revoke the old certificate if you want to generate a new certificate with the same Common Name. To revoke a certificate use the command:
openssl ca -revoke certs/vpn.pem
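Note that OpenVPN will only notice a revocation if you publish a CRL and point the server at it with the crl-verify option; a sketch, assuming the same CA directory layout as above:

```shell
# Regenerate the CRL after each revocation (run from /etc/pki/newCA)
openssl ca -gencrl -out crl/crl.pem
```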
That's all.
To display these files in a human-readable form:
- Certificate Signing Request (CSR)
openssl req -text -noout -verify -in vpn.csr
- Private key
openssl rsa -in private/vpn.key -check
- Certificate
openssl x509 -in certs/vpn.pem -text -noout