Ubuntu cannot connect to Cisco Router

Please follow these steps if you cannot connect to a router. For example:

ssh [email protected]

Output:

Unable to negotiate with blog.wapnet.nl port 22: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1

Use the -o parameter to offer the legacy key exchange algorithm:

ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 [email protected]

Output:

Unable to negotiate with blog.wapnet.nl port 22: no matching cipher found. Their offer: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc

Then add the -c parameter to offer a matching cipher:

ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 -c aes256-cbc [email protected]

And you are connected 🙂

The authenticity of host 'blog.wapnet.nl' can't be established.
RSA key fingerprint is SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Are you sure you want to continue connecting (yes/no/[fingerprint])?
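
If you connect to this device often, you can make these options persistent in ~/.ssh/config instead of typing them every time. A minimal sketch, using the host from the example above (the admin user name is just a placeholder):

# ~/.ssh/config
Host blog.wapnet.nl
    User admin
    KexAlgorithms +diffie-hellman-group1-sha1
    Ciphers +aes256-cbc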

Fix Screen Tearing in Linux

It took me a lot of time to troubleshoot screen tearing, so I want to share my solution with everyone who has the same frustrating tearing issues. My private setup is a Lenovo IdeaPad (gaming) with Nvidia and Intel (Prime) GPUs. I use the laptop screen and an external 24″ HDMI display.

In Windows 10 everything runs smoothly, but when I switch my dual boot to Linux, the frustrations begin.

I tried a lot of different Linux distributions (Fedora, Solus, Ubuntu, Pop!_OS, Arch Linux, openSUSE, and Zorin OS) and the KDE Plasma, GNOME, and Budgie desktops, plus a lot of hacks on all these systems, to get a smooth Linux GUI without screen tearing or other lag.

But I don’t like manual modifications/hacks to get the Nvidia setup smooth. The proprietary Nvidia drivers in particular can easily break your system. So this fix is easy to remember and easy to revert if you want to switch back to power saving and use the Intel driver instead of the Nvidia one.

Important, choose your display!

What you have to keep in mind when you have a lot of screen tearing is that you need to make a decision: use your laptop display or your external display. You can fix both displays, but not at the same time in a smooth way. If I find a solution in the future, I will post it on my blog. But in the meantime, I use only one screen at a time.

This procedure is for Ubuntu 20.04 LTS, but it should work for other distributions as well.

Install the driver

After a fresh Ubuntu installation, Nvidia and HDMI do not work as they should. So, kick off the first command:

$ sudo ubuntu-drivers autoinstall

And reboot!
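
If you first want to see which drivers Ubuntu detects for your hardware, you can list them before (or after) the autoinstall (optional check):

$ ubuntu-drivers devices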

Configure Nvidia driver Part I

Now configure the Nvidia/Intel Prime driver in on-demand mode, so you can do further configuration in the nvidia-settings GUI:

$ sudo prime-select on-demand

  • And reboot again
  • Now start nvidia-settings and switch to “NVIDIA (Performance Mode)”
  • Click save and… reboot!
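
You can verify which Prime profile is active at any time (optional check):

$ prime-select query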

Turn off one display

Go to your KDE or GNOME display settings and turn off one display. KDE saves these settings, so when you unplug the HDMI cable afterward, KDE will activate your laptop display. And of course, when you plug in your HDMI cable the next time, it will turn off your laptop display.
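
To check from a terminal which displays are currently active, xrandr can list the monitors (optional check):

$ xrandr --listmonitors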

Configure Nvidia driver Part II

These steps are optional but needed for better performance.

  • Start nvidia-settings > GPU 0 > PowerMizer
  • Change the Preferred Mode to “Prefer Maximum Performance”

Last but not least: Firefox hardware rendering

For compatibility reasons, hardware rendering in Firefox is turned off by default, so you have to enable it:

  • Go to about:config
  • Set layers.acceleration.force-enabled to true
  • Quit Firefox and restart it

Some debug information

I always use this YouTube video to check if the screen tearing has vanished completely.

When you want to see if you have the right drivers loaded, use this command:

$ lspci -k | grep -EA3 'VGA|3D|Display'

And you can start an application with maximum video power with this command. Change glxgears to your own program, of course, but I like glxgears because you can see the FPS in real time (see the screenshot, what a difference 😉).

$ __NV_PRIME_RENDER_OFFLOAD=1 glxgears

With nvidia-smi you can check whether the Nvidia GPU is actually running the application:
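
$ nvidia-smi

While glxgears is running, it should show up in the process list at the bottom of the output.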

Enjoy your tearing free Linux 🙂

/edit

Hahaha I saw this video today 🙂 Exactly my thoughts

Install Synology Drive Client on Solus Linux

The current 3rd-party repo is deprecated, so I’ve created my own fork for version 2.0.4-11112:

sudo eopkg bi --ignore-safety https://raw.githubusercontent.com/L0g0ff/3rd-party/master/network/download/synology-drive-client/pspec.xml
sudo eopkg it synology-drive-client*.eopkg
sudo rm synology-drive-client*.eopkg
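
If you ever want to remove it again, the standard eopkg remove command works:

sudo eopkg rm synology-drive-client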

Have fun!

504 Gateway Timeout on Synology NAS

Look familiar?

I want to move this blog to my own NAS because I have plenty of bandwidth, and with Cloudflare as a reverse proxy it is also secure enough (and I like to tinker, of course 😉).

I tried to use the Duplicator WordPress plugin to make a dump of my website and restore it in Synology Web Station. But during the database restore in step 2, I got a 504 gateway timeout every time after exactly 60 seconds.

What I could have done was a manual copy (FTP & database) and a restore of the files. But I was sure I would run into other errors in the future when I had to update WordPress or other plugins, so fixing this timeout issue was the only real solution.

In various places on the internet, I found that I had to change the Nginx site settings. So I put the timeout settings below in the associated “/etc/nginx/conf.d/site.conf” and restarted Nginx, but when I restored the database, Nginx was still failing.

    proxy_connect_timeout 600s;
    proxy_send_timeout 600s;
    proxy_read_timeout 600s;
    send_timeout 600s;

Then I tried to put these lines in “/etc/nginx/nginx.conf”, but when I restarted Nginx, the settings were overwritten and my changes were gone.

Every time Nginx restarts, Synology uses the template file “/usr/syno/share/nginx/nginx.mustache” to create a new nginx.conf. I changed the lines in that file and *boom*, everything was working 🙂

So, TL;DR:

sudo su -
vim /usr/syno/share/nginx/nginx.mustache

# add these lines inside the http section
    send_timeout                  600s;
    proxy_connect_timeout         600s;
    proxy_send_timeout            600s;
    proxy_read_timeout            600s;

synoservice --restart nginx

If you want to see the current Nginx config:

nginx -T 
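
And to check only the timeout values in the generated config, you can filter that dump (plain grep, nothing Synology-specific):

sudo nginx -T | grep timeout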

Have fun 🙂

Create Azure Linux VM with WordPress pre-installed

This is my first fully automated Linux Azure VM deployment, and I’d like to share it with you.

There are 3 parts:

  1. Create an SSH key pair
  2. PowerShell script
  3. Bash script

First, start PowerShell and create a key pair with a passphrase. Keep the default path (~/.ssh/id_rsa), because the script reads ~/.ssh/id_rsa.pub later on:

ssh-keygen -m PEM -t rsa -b 4096

Then save the bash script below somewhere on your local computer as script.sh (the PowerShell script uploads it in the last step):

#!/bin/bash
# Install WordPress, Apache, PHP, and MySQL
apt-get update
apt-get install -y wordpress php libapache2-mod-php mysql-server php-mysql

echo "Alias /blog /usr/share/wordpress" >>/etc/apache2/sites-available/wordpress.conf
echo "<Directory /usr/share/wordpress>" >>/etc/apache2/sites-available/wordpress.conf
echo "    Options FollowSymLinks" >>/etc/apache2/sites-available/wordpress.conf
echo "    AllowOverride Limit Options FileInfo" >>/etc/apache2/sites-available/wordpress.conf
echo "    DirectoryIndex index.php" >>/etc/apache2/sites-available/wordpress.conf
echo "    Order allow,deny" >>/etc/apache2/sites-available/wordpress.conf
echo "    Allow from all" >>/etc/apache2/sites-available/wordpress.conf
echo "</Directory>" >>/etc/apache2/sites-available/wordpress.conf
echo "<Directory /usr/share/wordpress/wp-content>" >>/etc/apache2/sites-available/wordpress.conf
echo "    Options FollowSymLinks" >>/etc/apache2/sites-available/wordpress.conf
echo "    Order allow,deny" >>/etc/apache2/sites-available/wordpress.conf
echo "    Allow from all" >>/etc/apache2/sites-available/wordpress.conf
echo "</Directory>" >>/etc/apache2/sites-available/wordpress.conf

a2ensite wordpress
a2enmod rewrite
systemctl restart apache2

# Create the WordPress database and a user with only the privileges WordPress needs
mysql -e "CREATE DATABASE wordpress;"
mysql -e "CREATE USER wordpress@localhost IDENTIFIED BY 'Secret@Pass1';"
mysql -e "GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER ON wordpress.* TO wordpress@localhost;"
mysql -e "FLUSH PRIVILEGES;"

echo "<?php" >>/etc/wordpress/config-localhost.php
echo "define('DB_NAME', 'wordpress');">>/etc/wordpress/config-localhost.php
echo "define('DB_USER', 'wordpress');">>/etc/wordpress/config-localhost.php
echo "define('DB_PASSWORD', 'Secret@Pass1');">>/etc/wordpress/config-localhost.php
echo "define('DB_HOST', 'localhost');">>/etc/wordpress/config-localhost.php
echo "define('DB_COLLATE', 'utf8_general_ci');">>/etc/wordpress/config-localhost.php
echo "define('WP_CONTENT_DIR', '/usr/share/wordpress/wp-content');">>/etc/wordpress/config-localhost.php
echo "?>">>/etc/wordpress/config-localhost.php

# Make sure MySQL is running (it is normally started automatically after installation)
service mysql start


# Rename the config to match the VM's public IP; the Ubuntu wordpress package loads /etc/wordpress/config-<host>.php per site
publicip=$(dig +short myip.opendns.com @resolver1.opendns.com) && mv /etc/wordpress/config-localhost.php /etc/wordpress/config-$publicip.php

Then put the code below in the PowerShell ISE, change some variables, and kick off the script.

The things you may need to change:

  • script.sh location

New-AzResourceGroup -Name lxautodeploy -Location westeurope

# Create a subnet configuration
$subnetConfig = New-AzVirtualNetworkSubnetConfig `
  -Name "mySubnet" `
  -AddressPrefix 192.168.1.0/24

# Create a virtual network
$vnet = New-AzVirtualNetwork `
  -ResourceGroupName "lxautodeploy" `
  -Location "westeurope" `
  -Name "myVNET" `
  -AddressPrefix 192.168.0.0/16 `
  -Subnet $subnetConfig

# Create a public IP address and specify a DNS name
$pip = New-AzPublicIpAddress `
  -ResourceGroupName "lxautodeploy" `
  -Location "westeurope" `
  -AllocationMethod Static `
  -IdleTimeoutInMinutes 4 `
  -Name "mypublicdns$(Get-Random)"


# Create an inbound network security group rule for port 22
$nsgRuleSSH = New-AzNetworkSecurityRuleConfig `
  -Name "myNetworkSecurityGroupRuleSSH"  `
  -Protocol "Tcp" `
  -Direction "Inbound" `
  -Priority 1000 `
  -SourceAddressPrefix * `
  -SourcePortRange * `
  -DestinationAddressPrefix * `
  -DestinationPortRange 22 `
  -Access "Allow"

# Create an inbound network security group rule for port 80
$nsgRuleWeb = New-AzNetworkSecurityRuleConfig `
  -Name "myNetworkSecurityGroupRuleWWW"  `
  -Protocol "Tcp" `
  -Direction "Inbound" `
  -Priority 1001 `
  -SourceAddressPrefix * `
  -SourcePortRange * `
  -DestinationAddressPrefix * `
  -DestinationPortRange 80 `
  -Access "Allow"

# Create a network security group
$nsg = New-AzNetworkSecurityGroup `
  -ResourceGroupName "lxautodeploy" `
  -Location "westeurope" `
  -Name "myNetworkSecurityGroup" `
  -SecurityRules $nsgRuleSSH,$nsgRuleWeb

# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface `
  -Name "myNic" `
  -ResourceGroupName "lxautodeploy" `
  -Location "westeurope" `
  -SubnetId $vnet.Subnets[0].Id `
  -PublicIpAddressId $pip.Id `
  -NetworkSecurityGroupId $nsg.Id

# Define a credential object
$securePassword = ConvertTo-SecureString ' ' -AsPlainText -Force # dummy value; password authentication is disabled below
$cred = New-Object System.Management.Automation.PSCredential ("azureuser", $securePassword)

# Create a virtual machine configuration
$vmConfig = New-AzVMConfig `
  -VMName "myLXVM" `
  -VMSize "Standard_D2s_v3" | `
Set-AzVMOperatingSystem `
  -Linux `
  -ComputerName "myLXVM" `
  -Credential $cred `
  -DisablePasswordAuthentication | `
Set-AzVMSourceImage `
  -PublisherName "Canonical" `
  -Offer "UbuntuServer" `
  -Skus "18.04-LTS" `
  -Version "latest" | `
Add-AzVMNetworkInterface `
  -Id $nic.Id

# Configure the SSH key
$sshPublicKey = cat ~/.ssh/id_rsa.pub
Add-AzVMSshPublicKey `
  -VM $vmConfig `
  -KeyData $sshPublicKey `
  -Path "/home/azureuser/.ssh/authorized_keys"

New-AzVM `
  -ResourceGroupName "lxautodeploy" `
  -Location westeurope -VM $vmConfig

Get-AzPublicIpAddress -ResourceGroupName "lxautodeploy" | Select "IpAddress"



Invoke-AzVMRunCommand -ResourceGroupName "lxautodeploy" -Name 'myLXVM' -CommandId 'RunShellScript' -ScriptPath "script.sh" -Verbose

Now you can go to http://<publicip>/blog to access the new blog

You can access the server with ssh azureuser@<publicip>

Have fun with it!

Install Pihole on Synology with docker

Unfortunately, there isn’t a Pi-hole add-on in the Synology package center. But you can run your Pi-hole in a Docker container instead 🙂

The reason you must use docker-compose instead of the Synology Docker package itself is that you want to bridge the NIC of your Synology and place the Pi-hole directly in your network (macvlan). You cannot do this with the GUI.

The steps:

  • Install docker with the package center
  • Activate SSH
  • Download the image pihole/pihole:latest
  • Log in with SSH
  • Type vi docker-compose.yaml
  • Paste the content from the docker-compose.yaml example into vi
  • Change the IP addresses to your own network
  • Type :w to save the file
  • Type :q to quit vi
  • Type “sudo docker-compose up”
  • Have fun!

Docker-compose.yaml Example

# Note: 192.168.123.xxx is an example network; update all of these to match your own.

version: '2'

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    hostname: pihole
    domainname: localhost             # <-- Update
    mac_address: d0:ca:ab:cd:ef:01
    cap_add:
      - NET_ADMIN
    networks:
      pihole_network:
        ipv4_address: 192.168.123.199   # <-- Update
    dns:
      - 127.0.0.1
      - 1.1.1.1
    ports:
      - 443/tcp
      - 53/tcp
      - 53/udp
      - 67/udp
      - 80/tcp
    environment:
      ServerIP: 192.168.123.199                 # <-- Update (match ipv4_address)
      VIRTUAL_HOST: pihole.localhost            # <-- Update (match hostname + domainname)
      WEBPASSWORD: "justarandompassword"        # <-- Add password (if required)
    restart: unless-stopped

networks:
  pihole_network:
    driver: macvlan
    driver_opts:
      parent: ovs_eth0
    ipam:
      config:
        - subnet: 192.168.123.0/24            # <-- Update
          gateway: 192.168.123.1              # <-- Update
          ip_range: 192.168.123.192/28        # <-- Update

When you want to update the docker container, all you have to do is:

sudo docker-compose down

and

sudo docker-compose up
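
Note that down/up alone reuses the image you already have on disk. To actually fetch a newer pihole/pihole:latest, pull it first before starting again (standard docker-compose commands; -d runs it in the background):

sudo docker-compose pull
sudo docker-compose up -d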

A good article I used to figure everything out is: http://tonylawrence.com/posts/unix/synology/free-your-synology-ports/

Ubuntu Linux cannot ping FQDN

Because this is the fifth time I’ve fixed this issue, I’m writing a blog post about it…

Microsoft uses .local as the recommended root of internal domains and serves them via unicast DNS. Linux uses .local as the root of multicast DNS. If you’re stuck on a broken MS network like this, reconfigure your Linux multicast DNS to use a different domain, like .alocal.

To do this, add a domain-name=.alocal line to the [server] section of /etc/avahi/avahi-daemon.conf, then restart avahi-daemon: sudo service avahi-daemon restart.

[server]
domain-name=.alocal

You may need to flush the DNS, mDNS, and resolver caches, as well as restart your web browsers to clear their internal caches.
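
On Ubuntu with systemd-resolved, flushing the resolver cache looks like this (on newer releases, resolvectl flush-caches is the equivalent):

sudo systemd-resolve --flush-caches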

Source: http://www.lowlevelmanager.com/2011/09/fix-linux-dns-issues-with-local.html

Convert PFX to PEM and upload the certificate to Plesk

Export the Private Key:

# openssl pkcs12 -in filename.pfx -nocerts -out key.pem

Remove the passphrase from the private key (Plesk needs the key unencrypted):

# openssl rsa -in key.pem -out server.key

Export the certificate:

# openssl pkcs12 -in filename.pfx -clcerts -nokeys -out cert.pem
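
Before uploading, you can verify that the exported certificate and key actually belong together by comparing their moduli (standard openssl checks; the two hashes should be identical):

# openssl x509 -noout -modulus -in cert.pem | openssl md5
# openssl rsa -noout -modulus -in server.key | openssl md5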

Now upload the certificate:

[screenshot: uploading the certificate in Plesk]

And bind the certificate in your hosting settings:

[screenshot: binding the certificate in the hosting settings]