I want to import my picture tags made in F-Spot (0.8.2) into digiKam (4.10).
I found Roland Geider's script, but it doesn't work on today's Linux (differences in the D-Bus structure): Original F-Spot to digiKam script.
I've made a trivial correction (commented out the problematic code) and now the script looks like this:
You have to provide a collection with the same path as in F-Spot. It can import tags.
If your F-Spot path doesn't exist anymore you have to alter photos.db. In my case some of the pictures had the path /home/zdjatka and some /mnt/zdjatka, but now all pictures are in the same place, /mnt/zdjatka, so I altered the path:
cp ~/.config/f-spot/photos.db ~/photos.db
sqlite3 ~/photos.db
update photos
  set base_uri=REPLACE(base_uri, "home", "mnt")
  where base_uri not like "%/mnt/zdjatka%";
update photo_versions
  set base_uri=REPLACE(base_uri, "home", "mnt")
  where base_uri not like "%/mnt/zdjatka%";
.quit
As you can see, photos.db is an SQLite3 database (usually located at ~/.config/f-spot/photos.db) and I copied the file to my home directory (so as not to modify the original file).
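To sanity-check the rewritten paths before importing, you can query the copy directly, for example:
sqlite3 ~/photos.db "select distinct base_uri from photos limit 10;"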
Assuming your digiKam is initialized in the Pictures directory and you have defined a collection in the same folder as defined in the F-Spot database (photos.db), the complete command to convert is:
python fspot_to_digikam.py Pictures --fspot-folder .
If you want to be able to run an application as Administrator you have to use the QuickSupport version of TeamViewer and follow Controlling the UAC on a remote Windows PC.
Only with this version of TeamViewer are you able to see the UAC window and enter your username and password.
So the correct steps are:
- Start TeamViewer on your computer.
- Ask your partner to start TeamViewer QuickSupport on his computer.
- Ask your partner for his TeamViewer ID shown in the TeamViewer application.
- Select the option Remote support and enter your partner's TeamViewer ID in the field Partner ID (Windows authentication can be used for all other connection modes as well).
- Click on Connect to Partner.
- Click on Advanced. The dialogue displays the advanced settings.
- In the drop-down field, set the authentication method to Windows.
- Enter your Windows (Admin) login, the domain (if used) and the Windows password.
- Click on Log On.
You are now connected to your partner’s computer and can control the UAC as you wish.
If you use ReadyNAS OS 6 you can install Pi-hole in a Docker container:
docker run --name my-new-pi-hole \
  -e ServerIP=192.168.1.28 \
  -e DNS1=192.168.1.4 \
  -e WEBPASSWORD=secret \
  -d \
  -p 8080:80 -p 53:53 -p 53:53/udp \
  diginc/pi-hole
You don't have to run
docker pull diginc/pi-hole
it will be done automatically (or rather automagically).
As you can see, I've set several parameters:
- run is the Docker command to run a container
- --name my-new-pi-hole is the name of the container (optional)
- -e sets an environment variable - it is used to pass options to the container
- ServerIP - the IP address that Pi-hole should use (mandatory)
- DNS1 - the IP of your upstream DNS server (optional) - if you don't provide it, Google's DNS will be used
- WEBPASSWORD (useful) - you will need it to log in to the administration portal
- -d - tells Docker that this container should run in the background
- -p - port redirection from your Docker machine to a port in the container
- 8080:80 (useful) - redirects port 8080 of your Docker machine to port 80 in the container
- 53:53 (optional) - redirects TCP port 53 of your Docker machine to port 53 in the container (not necessary in my case)
- 53:53/udp (very useful) - the same for the UDP port - this is what we need for DNS to work
- diginc/pi-hole - is of course the name of the image to run
Now you can use the command
docker ps
to get some information about your container:
CONTAINER ID   IMAGE            COMMAND      CREATED         STATUS                            PORTS                                                           NAMES
891aeb2c127b   diginc/pi-hole   "/s6-init"   7 seconds ago   Up 2 seconds (health: starting)   0.0.0.0:53->53/tcp, 0.0.0.0:53->53/udp, 0.0.0.0:8080->80/tcp   my-new-pi-hole
As you can see, the status is health: starting; after some time it will change to healthy.
Now you can log in to http://192.168.1.28:8080/admin/ (192.168.1.28 is the address of my Docker machine) to check that no one has used your server yet (Total queries 0).
Let's check if it works:
root@mynas:~# host jaqb.gda.pl 192.168.1.28
Using domain server:
Name: 192.168.1.28
Address: 192.168.1.28#53
Aliases:

jaqb.gda.pl has address 185.204.216.106
jaqb.gda.pl mail is handled by 0 mail.jaqb.gda.pl.
Yes!
Now it's enough to change the default DNS in your LAN to your Docker machine (in my case 192.168.1.28).
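For example, on an OpenWRT/Gargoyle router you can push the Pi-hole address to DHCP clients as their DNS server (DHCP option 6); a minimal sketch, assuming the LAN pool in /etc/config/dhcp is named 'lan':
uci add_list dhcp.lan.dhcp_option='6,192.168.1.28'
uci commit dhcp
/etc/init.d/dnsmasq restart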
If your proxy server (Squid and/or DansGuardian) is not on your gateway you can still set it up to be transparent.
In my examples the gateway has the address 192.168.1.1, and the Squid server has the address 192.168.1.28 and listens on port 8080.
Here is the configuration using iptables directly:
iptables -t nat -I PREROUTING -i eth0 ! -s 192.168.1.28 -p tcp --dport 80 -j DNAT --to 192.168.1.28:8080
iptables -t nat -I POSTROUTING -o eth0 -s 192.168.1.0/24 -d 192.168.1.28 -j SNAT --to 192.168.1.1
iptables -I FORWARD -s 192.168.1.0/24 -d 192.168.1.28 -i eth0 -o eth0 -p tcp --dport 8080 -j ACCEPT
and this is the same in the Gargoyle /etc/config/firewall file (you can edit it or use uci add firewall commands):
config redirect
    option name 'P12 to Squid DNAT'
    option src 'lan'
    option proto 'tcp'
    option dest_port '8080'
    option src_dport '80'
    option src_dip '! 192.168.1.1'
    option dest_ip '192.168.1.28'
    option src_ip '! 192.168.1.28'

config redirect
    option name 'P12 to Squid SNAT'
    option dest 'lan'
    option proto 'tcp'
    option src_dip '192.168.1.1'
    option dest_ip '192.168.1.28'
    option src_ip '192.168.1.0/24'
    option target 'SNAT'

config rule
    option name 'P12 to Squid'
    option dest 'lan'
    option dest_port '8080'
    option proto 'tcp'
    option src_ip '192.168.1.0/24'
    option dest_ip '192.168.1.28'
    option target 'ACCEPT'
after editing the /etc/config/firewall file you have to restart the firewall:
/etc/init.d/firewall restart
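To check that the redirection is in place and actually matching traffic, you can look at the rule counters, for example:
iptables -t nat -L PREROUTING -n -v
iptables -L FORWARD -n -v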
To allow Spyder to access the internet when behind a proxy server, I used the environment variables HTTP_PROXY and HTTPS_PROXY.
In the Anaconda Prompt I typed:
set HTTP_PROXY=192.168.1.2:8080
set HTTPS_PROXY=192.168.1.2:8080
pip install graphviz
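Alternatively, pip can be given the proxy directly, without the environment variables (same address as above):
pip install --proxy http://192.168.1.2:8080 graphviz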
Settings in the ~/.condarc file (as described in Using the .condarc...) didn't work:
proxy_servers:
  http: http://192.168.1.2:8080
  https: https://192.168.1.2:8080
Today I wrote my first application for ReadyNAS OS. It's very, very simple. It's just for Proxy Auto Configuration (PAC) / Web Proxy Autodiscovery Protocol (WPAD), so I called it wpad ;-)
The whole "application" serves only one file, wpad.dat. It fits in two files: /apps/wpad/http.conf and /apps/wpad/wpad.dat itself. The content of /apps/wpad/http.conf is:
<VirtualHost *:80>
    ServerAdmin admin@localhost
    ServerName wpad
    DocumentRoot /apps/wpad
    ErrorLog /apps/wpad/error.log
    LogLevel warn
</VirtualHost>
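The wpad.dat itself is just a standard PAC file; a minimal sketch of what it might contain (the proxy address is only an example, point it at your own proxy), created here with a heredoc on the NAS:
cat > /apps/wpad/wpad.dat << 'EOF'
function FindProxyForURL(url, host) {
    return "PROXY 192.168.1.28:8080; DIRECT";
}
EOF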
Now when Apache starts it creates a link:
root@NAS:~# ll /etc/apache2/sites-enabled/090-wpad.conf
lrwxrwxrwx 1 root root 20 Mar 22 21:15 /etc/apache2/sites-enabled/090-wpad.conf -> /apps/wpad/http.conf
For more detailed info about applications for ReadyNAS OS see the ReadyNAS Applications Specification.
To make WPAD work I also had to configure my router.
To resolve wpad as 192.168.1.12 (this is the address of my NAS), I added the following to /etc/dnsmasq.conf:
address=/wpad/192.168.1.12
and to the /etc/config/dhcp file, in the config dhcp 'lan' section:
list dhcp_option '252,http://wpad/wpad.dat'
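After restarting dnsmasq on the router, you can check both pieces from a LAN client; a quick sketch (192.168.1.1 stands for the router's LAN address here):
/etc/init.d/dnsmasq restart       # on the router, to pick up both changes
host wpad 192.168.1.1             # should resolve to 192.168.1.12
curl -s http://wpad/wpad.dat      # should return the PAC file served by the NAS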
To back up embedded systems, such as an Android phone or an OpenWRT router, you can use the tar command in conjunction with ssh. As I mentioned before, this method is only almost successful with Android. Here is an example with a Gargoyle router, referred to below as the device.
Similarly to Android devices, OpenWRT's tar doesn't support the --totals option, so I've prepared a wrapper (see below), because this option is necessary for BackupPC.
You will need to:
- Copy your previously generated id_rsa.pub to /etc/dropbear/authorized_keys on the device, to be able to log in without a password (see the sketch below).
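Dropbear has no ssh-copy-id on the device side, so one way to append the key is the sketch below (adjust the id_rsa.pub path to wherever your backuppc user's home is, e.g. /var/lib/backuppc on Debian):
cat /var/lib/backuppc/.ssh/id_rsa.pub | ssh root@router "cat >> /etc/dropbear/authorized_keys"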
Log in to the device manually for the first time (to add it to known_hosts):
sudo -u backuppc /usr/bin/ssh -q -x -n -l root router
As you can see, I logged in as root to the device named router. Alternatively you can edit known_hosts manually.
On the device create a file (e.g. /root/tar_totals.bash) to emulate the --totals option behaviour:
root@router:~# cat /root/tar_totals.bash
tar $*
echo "Total bytes written: 10240 (10KiB, 3.6MiB/s)" >&2
and of course make it executable:
chmod u+x /root/tar_totals.bash
(this is what I can't do on Android)
Define the host in BackupPC with these settings:
$Conf{BackupFilesExclude} = {};
$Conf{BackupFilesOnly} = { '/' => [ 'etc' ] };
$Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host $tarPath -c -v -f - -C $shareName+';
$Conf{TarClientPath} = '/root/tar_totals.bash';
$Conf{TarShareName} = [ '/' ];
$Conf{XferMethod} = 'tar';
As you can see, only /etc is copied, and I use the wrapper instead of the tar command.
Now you can back it up. Of course, in the log you will see:
Running: /usr/bin/ssh -q -x -n -l root router /root/tar_totals.bash -c -v -f - -C / ./etc
full backup started for directory /
Xfer PIDs are now 23732,23731
Total bytes written: 10240 (10KiB, 3.6MiB/s)
(...)
where you can see the fake transfer summary.
Unfortunately, on an Android device you cannot set a file as executable:
HWVTR:/storage/emulated/0/ssh $ chmod u+x tar_totals.bash
chmod: chmod 'tar_totals.bash' to 100760: Operation not permitted
After you manage to ssh to your Android device, you can set it up to be backed up with BackupPC.
Since the rsync command is not available, you can use tar. In the configuration you have to set:
$Conf{XferMethod} = 'tar';
change the path to tar:
$Conf{TarClientPath} = 'tar';
and the command to execute:
$Conf{TarClientCmd} = '$sshPath -q -x -n -p 2222 $host env LC_ALL=C $tarPath -c -v -f - -C $shareName+';
As you can see, I've removed -l root and --totals.
The complete config is:
$Conf{XferMethod} = 'tar';
$Conf{TarClientPath} = 'tar';
$Conf{TarShareName} = [ '/storage/emulated/0/DCIM/Camera/' ];
$Conf{TarClientCmd} = '$sshPath -q -x -n -p 2222 $host env LC_ALL=C $tarPath -c -v -f - -C $shareName+';
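Before letting BackupPC run this, you can test the transfer manually as the backuppc user; a sketch, where huawei_p10 is the host name I defined in BackupPC (visible in the log below):
sudo -u backuppc /usr/bin/ssh -q -x -n -p 2222 huawei_p10 \
    env LC_ALL=C tar -c -f - -C /storage/emulated/0/DCIM/Camera . | tar -tvf - | head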
There are still some problems, because the backup ends with an error:
Running: /usr/bin/ssh -q -x -n -p 2222 huawei_p10 env LC_ALL=C tar -c -v -f - -C /storage/emulated/0/DCIM/Camera .
full backup started for directory /storage/emulated/0/DCIM/Camera
Xfer PIDs are now 19406,19405
(...)
tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 78 filesTotal, 227795841 sizeTotal
Backup aborted ()
Not saving this as a partial backup since it has fewer files than the prior one (got 78 and 0 files versus 1814)
You can back up an Android phone/tablet (the device) with rsync installed on some host and SimpleSSHD installed on the device.
First you have to check if you can ssh to the device:
To log in for the first time, you have to run SimpleSSHD and press START to run it in the foreground. From the host, log in to the device using the command:
sudo -u backuppc /usr/bin/ssh 192.168.1.241 -p 2222
enter the password displayed in the SimpleSSHD window on the device; you will see something like this:
no authorized_keys, generating single-use password
--------
yrGAex8s
--------
As you can see, my device has the address 192.168.1.241 and the server is using its default port 2222. You don't have to provide any user name (i.e. -l someuser).
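To skip the single-use password next time, you can append the backuppc user's public key to SimpleSSHD's authorized_keys; a sketch, assuming SimpleSSHD's default home directory /storage/emulated/0/ssh (check the app's settings, the path may differ):
cat ~backuppc/.ssh/id_rsa.pub | ssh -p 2222 192.168.1.241 "cat >> /storage/emulated/0/ssh/authorized_keys"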
Now you can backup device manually using rsync:
rsync --update -e 'ssh -p 2222' -azv \
  192.168.1.241:/storage/emulated/0/DCIM/Camera/ /data/myPhoneBackup/
Another approach is to use the Rsync Wrapper by Letscorp (I haven't tested it yet).
The main goal is to make the Android device able to be backed up using BackupPC.
Installation
yum install https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
yum install puppetserver
Add the following lines to /etc/puppetlabs/puppet/puppet.conf (of course, change puppetmaster to your host name):
dns_alt_names = puppet,puppetmaster

[agent]
runinterval = 1m
You can reduce memory usage: edit the /etc/sysconfig/puppetserver file and change JAVA_ARGS to:
JAVA_ARGS="-Xms256m -Xmx384m -XX:MaxPermSize=256m"
then enable and start the service:
systemctl enable puppetserver
systemctl start puppetserver
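Once it is running, a quick sanity check that the server is up and listening on port 8140:
systemctl status puppetserver
ss -tlnp | grep 8140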
First manifests
To communicate with agents, Puppet uses port 8140 - we will try to open it using Puppet itself, in three steps:
- Install a module to manage firewalld
- Create a new puppet service
- Enable this service for the public zone
Module installation
First we need a module to manage firewalld. To install it, create a manifest file, e.g. puppet.test.firewalld-install.pp (as you can see I used the /root directory, but it doesn't matter):
$module_firewalld = 'crayfishx-firewalld'

exec { 'install-puppet-module-firewalld':
  command => "puppet module install ${module_firewalld}",
  unless  => "puppet module list | grep ${module_firewalld}",
  path    => ['/bin', '/opt/puppetlabs/bin']
}
and test (--noop) the first manifest:
[root@puppetmaster ~]# puppet apply puppet.test.firewalld-install.pp --noop
Notice: Compiled catalog for puppetmaster.lan in environment production in 0.18 seconds
Notice: /Stage[main]/Main/Exec[install-puppet-module-firewalld]/returns: current_value notrun, should be 0 (noop)
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Applied catalog in 1.45 seconds
After this you can apply it (without --noop):
[root@puppetmaster ~]# puppet apply puppet.test.firewalld-install.pp
Notice: Compiled catalog for puppetmaster.lan in environment production in 0.18 seconds
Notice: /Stage[main]/Main/Exec[install-puppet-module-firewalld]/returns: executed successfully
Notice: Applied catalog in 14.33 seconds
of course you could just run:
puppet module install crayfishx-firewalld
but it would be too simple ;-)
Create service
Now we can create the firewalld service. Create a file puppet.test.firewalld-service.pp:
firewalld::custom_service{ 'puppet':
  short       => 'puppet',
  description => 'Puppet Client access Puppet Server',
  port        => [
    {
      'port'     => '8140',
      'protocol' => 'tcp',
    },
    {
      'port'     => '8140',
      'protocol' => 'udp',
    },
  ],
}
and apply this:
[root@puppetmaster ~]# puppet apply puppet.test.firewalld-service.pp
Notice: Compiled catalog for puppetmaster.lan in environment production in 0.27 seconds
Notice: /Stage[main]/Main/Firewalld::Custom_service[puppet]/File[/etc/firewalld/services/puppet.xml]/ensure: defined content as '{md5}3fc4d356e7cb57739c8ceb8a0b483eaa'
Notice: /Stage[main]/Main/Firewalld::Custom_service[puppet]/Exec[firewalld::custom_service::reload-puppet]: Triggered 'refresh' from 1 events
Notice: Applied catalog in 1.32 seconds
(I cut the warnings about deprecated validate functions)
As you can see there is a new file:
[root@puppetmaster ~]# more /etc/firewalld/services/puppet.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>puppet</short>
  <description>Puppet Client access Puppet Server</description>
  <port protocol="tcp" port="8140" />
  <port protocol="udp" port="8140" />
</service>
Again we could create this file manually, but what for?
Enable service
It's time to use this service. Create a file puppet.test.firewalld-apply.pp:
firewalld_service { 'Allow puppet from the public zone':
  ensure  => 'present',
  service => 'puppet',
  # zone    => 'external',
}
(because public is the default, the zone parameter can be omitted) and apply this:
[root@puppetmaster ~]# puppet apply puppet.test.firewalld-apply.pp
Notice: Compiled catalog for puppetmaster.lan in environment production in 0.15 seconds
Notice: /Stage[main]/Main/Firewalld_service[Allow puppet from the public zone]/ensure: created
Notice: Applied catalog in 2.04 seconds
and check:
[root@puppetmaster ~]# firewall-cmd --list-services
ssh dhcpv6-client dhcp dns puppet
This way is more interesting than the boring alternative:
firewall-cmd --add-service=puppet
firewall-cmd --permanent --add-service=puppet
firewall-cmd --reload
Summary
As you can see, setting up a test Puppet master (and using it) is not so difficult. The configuration files and manifests are easy to understand.
Next step is to manage agents on other hosts...