From bd86f4fc950cdc5bb4cb346f48c14a6e356dc4fb Mon Sep 17 00:00:00 2001 From: David Luevano Alvarado Date: Thu, 7 Mar 2024 21:55:16 -0600 Subject: stop tracking live/ --- live/blog/a/acomodada_la_pagina_de_arte.html | 151 ----- live/blog/a/al_fin_tengo_fibra_opticona.html | 152 ----- live/blog/a/arch_logs_flooding_disk.html | 189 ------ live/blog/a/asi_nomas_esta_quedando.html | 154 ----- live/blog/a/devs_android_me_trozaron.html | 162 ----- live/blog/a/el_blog_ya_tiene_timestamps.html | 155 ----- live/blog/a/first_blog_post.html | 147 ----- live/blog/a/git_server_with_cgit.html | 287 --------- live/blog/a/hoy_toco_desarrollo_personaje.html | 160 ----- .../blog/a/jellyfin_server_with_sonarr_radarr.html | 686 --------------------- live/blog/a/learned_go_and_lua_hard_way.html | 159 ----- live/blog/a/mail_server_with_postfix.html | 527 ---------------- live/blog/a/manga_server_with_komga.html | 539 ---------------- live/blog/a/new_blogging_system.html | 156 ----- .../a/password_manager_authenticator_setup.html | 160 ----- live/blog/a/pastebin_alt_with_privatebin.html | 401 ------------ live/blog/a/rewrote_pyssg_again.html | 152 ----- live/blog/a/tenia_esto_descuidado.html | 154 ----- live/blog/a/torrenting_with_qbittorrent.html | 411 ------------ live/blog/a/updated_pyssg_pymdvar_and_website.html | 152 ----- .../updating_creating_entries_titles_to_setup.html | 149 ----- live/blog/a/volviendo_a_usar_la_pagina.html | 152 ----- live/blog/a/vpn_server_with_openvpn.html | 446 -------------- live/blog/a/website_with_nginx.html | 284 --------- live/blog/a/xmpp_server_with_prosody.html | 665 -------------------- 25 files changed, 6750 deletions(-) delete mode 100644 live/blog/a/acomodada_la_pagina_de_arte.html delete mode 100644 live/blog/a/al_fin_tengo_fibra_opticona.html delete mode 100644 live/blog/a/arch_logs_flooding_disk.html delete mode 100644 live/blog/a/asi_nomas_esta_quedando.html delete mode 100644 live/blog/a/devs_android_me_trozaron.html delete mode 100644 
live/blog/a/el_blog_ya_tiene_timestamps.html delete mode 100644 live/blog/a/first_blog_post.html delete mode 100644 live/blog/a/git_server_with_cgit.html delete mode 100644 live/blog/a/hoy_toco_desarrollo_personaje.html delete mode 100644 live/blog/a/jellyfin_server_with_sonarr_radarr.html delete mode 100644 live/blog/a/learned_go_and_lua_hard_way.html delete mode 100644 live/blog/a/mail_server_with_postfix.html delete mode 100644 live/blog/a/manga_server_with_komga.html delete mode 100644 live/blog/a/new_blogging_system.html delete mode 100644 live/blog/a/password_manager_authenticator_setup.html delete mode 100644 live/blog/a/pastebin_alt_with_privatebin.html delete mode 100644 live/blog/a/rewrote_pyssg_again.html delete mode 100644 live/blog/a/tenia_esto_descuidado.html delete mode 100644 live/blog/a/torrenting_with_qbittorrent.html delete mode 100644 live/blog/a/updated_pyssg_pymdvar_and_website.html delete mode 100644 live/blog/a/updating_creating_entries_titles_to_setup.html delete mode 100644 live/blog/a/volviendo_a_usar_la_pagina.html delete mode 100644 live/blog/a/vpn_server_with_openvpn.html delete mode 100644 live/blog/a/website_with_nginx.html delete mode 100644 live/blog/a/xmpp_server_with_prosody.html (limited to 'live/blog/a') diff --git a/live/blog/a/acomodada_la_pagina_de_arte.html b/live/blog/a/acomodada_la_pagina_de_arte.html deleted file mode 100644 index 18482de..0000000 --- a/live/blog/a/acomodada_la_pagina_de_arte.html +++ /dev/null @@ -1,151 +0,0 @@ - - - - - - -Al fin ya me acomodé la página pa' los dibujos -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

I finally got the page for my drawings set up

- -

That's right, the art.luevano.xyz sub-domain is finally set up for my art. So yeah, I'm happy about that.

-

This was possible because I rewrote the way pyssg handles templates; it now uses the jinja system instead of the mess I was doing before.

-

And that's about it: here's the first post, and of course here's the RSS link https://art.luevano.xyz/rss.xml.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/al_fin_tengo_fibra_opticona.html b/live/blog/a/al_fin_tengo_fibra_opticona.html deleted file mode 100644 index d62f298..0000000 --- a/live/blog/a/al_fin_tengo_fibra_opticona.html +++ /dev/null @@ -1,152 +0,0 @@ - - - - - - -Al fin tengo fibra ópticona -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

I finally have fiber optic internet

- -

Those who know me will know that I've spent around 2 years trying to get fiber optic internet (specifically from T*lm*x). The problem was that there were never any nodes/terminals available or, honestly, that the technicians didn't even want to do their job, since they're used to being slipped some cash to do the install.

-

Well, the point is that I had to put up with the horrible *zz* company, which only offers copper; the service is bad and they raise the price constantly. Because of that last part I checked other companies' prices again to compare, and it turns out they were charging me around $100 - $150 pesos extra for the same package I already had/have. I was already angry at that point, and it didn't help that I tried talking to their very incompetent support and they couldn't, let's say, "sort me out", because how is it possible that, after being a customer for about 5 years, they can't even tell me they now have better packages (which honestly are the same package, just cheaper)?

-

I tried asking them to switch me to the current package (everything the same, the only difference being the price), but it turns out they would put me on a forced contract term. Obviously that set me off, so I checked with T*lm*x and, to my surprise, fiber optic was now available at my place. I started the portability process and they told me it would be installed in about two weeks, but the absolute legend of a technician called me the next day to tell me he was ALREADY OUTSIDE MY HOUSE to install it. I won.

-

Turns out there are nodes/terminals available now; in fact they installed 3 new ones and they're completely empty, so I got very lucky, and the legend of a technician got it done in half a second without any issue. He didn't ask for anything other than details on where I wanted the modem. I didn't have cash, otherwise I would've given him some money; he was really cool about everything.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/arch_logs_flooding_disk.html b/live/blog/a/arch_logs_flooding_disk.html deleted file mode 100644 index eb2c835..0000000 --- a/live/blog/a/arch_logs_flooding_disk.html +++ /dev/null @@ -1,189 +0,0 @@ - - - - - - -Configure system logs on Arch to avoid filled up disk -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Configure system logs on Arch to avoid filled up disk

- -

It’s been a while since I’ve been running a minimal server on a VPS, and it is a pretty humble VPS with just 32 GB of storage, which works for me as I’m only hosting a handful of services. At some point I noticed that the disk kept filling up a bit more every time I checked.

-

Turns out that, out of the box, Arch has a default config for systemd‘s journald that keeps a persistent journal log but doesn’t have a limit on how much logging is kept. This means that depending on how many services you run, and how aggressively they log, it can fill up pretty quickly. In my case I had around 15 GB of logs between the normal journal directory, the nginx directory and my now unused prosody instance.

-

For prosody it was just a matter of deleting the directory as I’m not using it anymore, which freed around 4 GB of disk space. -For journal I did a combination of configuring SystemMaxUse and creating a Namespace for all “email” related services as mentioned in the Arch wiki: systemd/Journal; basically just configuring /etc/systemd/journald.conf (and /etc/systemd/journald@email.conf with the comment change) with:

-
[Journal]
-Storage=persistent
-SystemMaxUse=100MB # 50MB for the "email" Namespace
-
-

And then for each service that I want to use this “email” Namespace I add:

-
[Service]
-LogNamespace=email
-
-

Which can be changed manually or by executing systemctl edit service_name.service and it will create an override file which will be read on top of the normal service configuration. Once configured restart by running systemctl daemon-reload and systemctl restart service_name.service (probably also restart systemd-journald).
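For reference, `systemctl edit service_name.service` creates a drop-in file under the unit's `.d` directory that is merged on top of the unit's normal configuration; written out by hand it would look like this (service name is a placeholder):

```ini
# /etc/systemd/system/service_name.service.d/override.conf
# Drop-in read on top of the unit's shipped configuration;
# only the keys listed here are overridden.
[Service]
LogNamespace=email
```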

-

I also disabled logging for ufw by running ufw logging off, as it logs everything to the kernel “unit” and I didn’t find a way to pipe its logs to a separate directory. It isn’t that useful anyway, as most of the entries are the usual [UFW BLOCK] messages. If I need debugging then I’ll just enable it again. Note that you can also lower the logging level if you still want some kind of logging.

-

Finally, to clean up the nginx logs, you need to install logrotate (pacman -S logrotate), as that is what is used to clean up the nginx log directory. nginx already “installs” a config file for logrotate, located at /etc/logrotate.d/; I just added a few lines:

-
/var/log/nginx/*log {
-    rotate 7
-    weekly
-    dateext
-    dateformat -%Y-%m-%d
-    missingok
-    notifempty
-    create 640 http log
-    sharedscripts
-    compress
-    postrotate
-        test ! -r /run/nginx.pid || kill -USR1 `cat /run/nginx.pid`
-    endscript
-}
-
-

Once you’re ok with your config, it’s just a matter of running logrotate -v -f /etc/logrotate.d/nginx which forces the run of the rule for nginx. After this, logrotate will be run daily if you enable the logrotate timer: systemctl enable logrotate.timer.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/asi_nomas_esta_quedando.html b/live/blog/a/asi_nomas_esta_quedando.html deleted file mode 100644 index b3793a2..0000000 --- a/live/blog/a/asi_nomas_esta_quedando.html +++ /dev/null @@ -1,154 +0,0 @@ - - - - - - -Así nomás está quedando el página -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

The page is coming along just fine

- -

I've been tidying up the sItE a bit more; I finally added the contact and donate “sections”, in case there's some madman out there who wants to throw money my way.

-

I also set up an XMPP server which, in short, is a decentralized instant messaging protocol (and more), so anyone can create an account on whatever server they want and talk to accounts created on other servers… exactly, just like email. And this is awesome because if you run your own server, just like with an email server, you control what features it has, who can create accounts, whether there's end-to-end encryption (or at least end-to-server), among a ton of other things.

-

Right now this server is compliant with the conversations app and the movim social network, but it should really work with almost any XMPP client, unless that client implements something my server doesn't have. I also set up a Matrix server, which is very similar but runs on a different protocol and feels more like discord/slack (at least in element); very cool as well.

-

While there are still things to do on these two servers (besides writing some entries documenting how I set them up), I want to move on to something else: setting up an art section. In theory that's quite simple, but since I want to automate publishing those posts, I need to tweak pyssg a bit so it works nicely for this.

-

Lastly, I also want to touch up the CSS a bit, because I left it in a pretty rough state and I want to add/adjust a few things so it looks cleaner and moderately pretty… within reason, because I obviously don't care if it looks like a page from the 2000s.

-

Update: I already took down the XMPP server because it consumed quite a lot of resources and I wasn't using it that much; if I get a better server in the future I might host it again.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/devs_android_me_trozaron.html b/live/blog/a/devs_android_me_trozaron.html deleted file mode 100644 index 0821362..0000000 --- a/live/blog/a/devs_android_me_trozaron.html +++ /dev/null @@ -1,162 +0,0 @@ - - - - - - -Los devs de Android/MIUI me trozaron -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

The Android/MIUI devs screwed me over

- -

I've been putting off this entry for two weeks because I was really angry (still am, but it's passing) and felt lazy. Anyway, first of all, this needs a bit of context about two little things:

- -

Now let's take it step by step. First of all, what happened is that I ordered a microSD with more capacity than the one I already had (64 GB -> 512 GB, poggies), because lately I've been downloading and reading a lot of manga, so I was running out of space. It arrived on my birthday, which was great; I started backing up the microSD I already had and getting everything ready, all very nice.

-

I started having problems because, moving so many small files (remember that tachiyomi treats each page as a single image), the connection between my phone and my computer kept dropping for some reason; lots of issues in general. So I took the new microSD out and plugged it directly into my computer through an adapter, to struggle less and make it faster.

-

Moving files directly on the microSD like this can corrupt the card; I don't know the details, but it happens (or maybe I'm dumb and did something wrong). So when I finished moving everything to the new microSD and put it in the phone, the phone got mad because it couldn't detect it and wanted to format the card. At this point I didn't care much; it was just a matter of moving the files again and being more careful; “no issues from my end”, as I'd say in my standups.

-

Everything went to hell because, at the point of choosing to format the microSD, my phone gave me the option to “use the micro SD for the phone” or “use the micro SD as portable storage” (or something along those lines), and I, stupidly, chose the first one, because it made sense to me: “well yeah, I'm going to use this card with this phone”.

-

Well, I was screwed: it turns out that first option actually meant the micro SD would be used as internal storage, through this adoptable storage thing. So I basically lost my internal storage capacity (128 GB approx.), and the whole new microSD was used as internal memory. Everything got merged; if I tried to remove the microSD everything broke and I couldn't use many applications. “No problem”, I thought, “it's just a matter of disabling this adoptable storage nonsense”.

-

No way, said the Android devs: this thing is strictly one-way. You can enable adoptable storage, but to disable it you are forced to factory reset your phone. I was done for; I ate dirt; I lost.

-

So that's what I did, no way around it. I backed up everything I could think of (I also realized that G**gl* authenticator is garbage since it doesn't let you make backups, among other things; better use Aegis authenticator instead), disabled everything that had to be disabled and did the factory reset, oh well. But as always things went wrong and I had to eat dirt with the bank because they blocked my card, I lost credentials needed for work (resolved quickly), etc., etc. It doesn't matter anymore, almost everything is solved now; all that's left is going to the bank to sort out the blocked card (that's a rant for another day: damn useless banking apps, they need to do one single thing and they do it badly).

-

At the end of the day, the cause of the problem was the damn manga (for trying to back it up), which I ended up downloading again manually, and it turned out better because apparently tachiyomi added an option to “zip” manga into the CBZ format, so now it's easier to move around, the phone doesn't choke, etc., etc.

-

Finally, I want to say that the Android devs are idiots for not making adoptable storage reversible, and the MIUI devs even more so for not giving details about what their formatting options actually mean, especially when an option is so disruptive that reverting it requires factory resetting your phone; it's mostly MIUI's fault: on top of putting a ton of A(i)DS in all their apps, they can't write a decent description for their options. REEEE.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/el_blog_ya_tiene_timestamps.html b/live/blog/a/el_blog_ya_tiene_timestamps.html deleted file mode 100644 index bc10bf1..0000000 --- a/live/blog/a/el_blog_ya_tiene_timestamps.html +++ /dev/null @@ -1,155 +0,0 @@ - - - - - - -Así es raza, el blog ya tiene timestamps -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

That's right folks, the blog now has timestamps

- -

Well, that's it; this entry is just an update on my first post. I modified the ssg enough for it to handle timestamps, and I'm now more familiar with the script so I'll be able to extend it further. For now, entries have their creation date (and modification date, where applicable) at the end, and in the index they're now sorted by date; it's somewhat basic for now, but easy to extend.

-

The only thing left is to change the blog's format a bit (and the site's in general), because in a moment of desperation I justified all the text and it doesn't always look good, so that needs fixing. And although it took me longer than I would've liked, “that's how it turned out”, as a certain character would say.

-

The modified ssg is in my dotfiles (or directly here). -Since in the end I stopped using the modified ssg, this no longer exists.

-

Lastly, I also removed the .html extensions from the URLs, because they look pretty ugly, but links ending in .html still redirect to their extension-less counterpart, so there's no issue at all.

-

Update: I'm now using my own solution instead of ssg, which I called pyssg, and which I start talking about here.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/first_blog_post.html b/live/blog/a/first_blog_post.html deleted file mode 100644 index 314abb1..0000000 --- a/live/blog/a/first_blog_post.html +++ /dev/null @@ -1,147 +0,0 @@ - - - - - - -This is the first blog post, just for testing purposes -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

This is the first blog post, just for testing purposes

- -

I’m making this post just to figure out how ssg5 and lowdown are supposed to work, and eventually rssg.

-

At the moment I’m not satisfied because there’s no automatic date insertion into the 1) html file, 2) the blog post itself and 3) the listing system in the blog homepage which also has a problem with the ordering of the entries. And all of this just because I didn’t want to use Luke’s lb solution as I don’t really like that much how he handles the scripts (but they just work).

-

Hopefully, for tomorrow all of this will be sorted out and I’ll have a working blog system.

-

Update: I’m now using my own solution which I called pyssg, of which I talk about here.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/git_server_with_cgit.html b/live/blog/a/git_server_with_cgit.html deleted file mode 100644 index 605d2bf..0000000 --- a/live/blog/a/git_server_with_cgit.html +++ /dev/null @@ -1,287 +0,0 @@ - - - - - - -Set up a Git server and cgit front-end -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up a Git server and cgit front-end

- -

My git server is all I need to setup to actually kill my other server (I’ve been moving from servers on these last 2-3 blog entries), that’s why I’m already doing this entry. I’m basically following git’s guide on setting up a server plus some specific stuff for btw i use Arch Linux (Arch Linux Wiki: Git server and Step by step guide on setting up git server in arch linux (pushable)).

-

Note that this is mostly for personal use, so there’s no user/authentication control other than that of normal ssh. And as with the other entries, most if not all commands here are run as root unless stated otherwise.

-

Table of contents

- -

Prerequisites

-

I might get tired of saying this (it’s just copy paste, basically)… but you will need the same prerequisites as before (check my website and mail entries), with the extras:

- -

Git

-

Git is a version control system.

-

If not installed already, install the git package:

-
pacman -S git
-
-

On Arch Linux, when you install the git package, a git user is automatically created, so all you have to do is decide where you want to store the repositories. I like them to be in /home/git, as if git were a “normal” user. So, create the git folder (with corresponding permissions) under /home and set the git user’s home to /home/git:

-
mkdir /home/git
-chown git:git /home/git
-usermod -d /home/git git
-
-

Also, the git user is “expired” by default and will be locked (needs a password), change that with:

-
chage -E -1 git
-passwd git
-
-

Give it a strong one and remember to use PasswordAuthentication no for ssh (as you should). Create the .ssh/authorized_keys for the git user and set the permissions accordingly:

-
mkdir /home/git/.ssh
-chmod 700 /home/git/.ssh
-touch /home/git/.ssh/authorized_keys
-chmod 600 /home/git/.ssh/authorized_keys
-chown -R git:git /home/git
-
-

Now it is a good idea to copy over your local SSH public keys to this file, to be able to push/pull to the repositories. Do it by either manually copying them or using ssh‘s built-in ssh-copy-id (for that you may want to check your ssh configuration, in case you don’t let people access your server with user/password).
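The ssh hardening mentioned above boils down to a couple of sshd_config lines; a sketch of the relevant part, assuming key-only logins are wanted:

```
# /etc/ssh/sshd_config (relevant lines only)
# Key-based auth only; password logins disabled entirely.
PasswordAuthentication no
PermitRootLogin prohibit-password
```

Remember to restart sshd after changing this, and to test the key login from a second session before closing the current one.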

-

Next, and almost finally, we need to edit the git-daemon service, located at /usr/lib/systemd/system/ (called git-daemon@.service):

-
...
-ExecStart=-/usr/lib/git-core/git-daemon --inetd --export-all --base-path=/home/git --enable=receive-pack
-...
-
-

I just appended --enable=receive-pack and note that I also changed the --base-path to reflect where I want to serve my repositories from (has to match what you set when changing git user’s home).

-

Now, go ahead and start and enable the git-daemon socket:

-
systemctl start git-daemon.socket
-systemctl enable git-daemon.socket
-
-

You’re basically done. Now you should be able to push/pull repositories to your server… except, you haven’t created any repository in your server, that’s right, they’re not created automatically when trying to push. To do so, you have to run (while inside /home/git):

-
git init --bare {repo_name}.git
-chown -R git:git {repo_name}.git
-
-

Those two lines above will need to be run each time you want to add a new repository to your server. There are options to “automate” this but I like it this way.
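Since those two commands are needed for every new repository, they can be wrapped in a tiny helper; a sketch, where the name newrepo is made up and it is meant to be run as root inside /home/git on the server:

```shell
#!/bin/sh
# newrepo: create a bare repository ready to be pushed to.
# Usage: newrepo my_project   -> creates my_project.git
newrepo() {
    name="${1%.git}"                  # accept "foo" or "foo.git"
    git init --bare "${name}.git"
    # chown needs root; ignore the failure when trying this locally
    chown -R git:git "${name}.git" 2>/dev/null || true
}

newrepo dotfiles   # creates dotfiles.git in the current directory
```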

-

After that you can already push/pull to your repository. I have my repositories (locally) set up so I can push to more than one remote at the same time (my server, GitHub, GitLab, etc.); to do so, check this gist.
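The gist linked above boils down to keeping a single origin remote with several push URLs, so one git push hits every server. A sketch using a throwaway repository and made-up remote URLs:

```shell
# One "origin" remote that pushes to several servers at once.
# Remote URLs here are placeholders; swap in your own.
set -e
cd "$(mktemp -d)"                 # throwaway repo just to demonstrate
git init -q multi-remote && cd multi-remote
git remote add origin git@git.example.com:dotfiles.git

# The first --add --push must repeat the original URL (adding a push
# URL stops git from falling back to the fetch URL), then add mirrors:
git remote set-url --add --push origin git@git.example.com:dotfiles.git
git remote set-url --add --push origin git@github.com:user/dotfiles.git

git remote -v   # fetch URL unchanged; two push URLs listed
```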

-

Cgit

-

Cgit is a fast web interface for git.

-

This is optionally since it’s only for the web application.

-

Install the cgit and fcgiwrap packages:

-
pacman -S cgit fcgiwrap
-
-

Now, just start and enable the fcgiwrap socket:

-
systemctl start fcgiwrap.socket
-systemctl enable fcgiwrap.socket
-
-

Next, create the git.conf as stated in my nginx setup entry. Add the following lines to your git.conf file:

-
server {
-    listen 80;
-    listen [::]:80;
-    root /usr/share/webapps/cgit;
-    server_name {yoursubdomain}.{yourdomain};
-    try_files $uri @cgit;
-
-    location @cgit {
-        include fastcgi_params;
-        fastcgi_param SCRIPT_FILENAME $document_root/cgit.cgi;
-        fastcgi_param PATH_INFO $uri;
-        fastcgi_param QUERY_STRING $args;
-        fastcgi_param HTTP_HOST $server_name;
-        fastcgi_pass unix:/run/fcgiwrap.sock;
-    }
-}
-
-

Where the server_name line depends on you, I have mine setup to git.luevano.xyz and www.git.luevano.xyz. Optionally run certbot --nginx to get a certificate for those domains if you don’t have already.

-

Now, all that’s left is to configure cgit. Create the configuration file /etc/cgitrc with the following content (my personal options, pretty much the default):

-
css=/cgit.css
-logo=/cgit.png
-
-enable-http-clone=1
-# robots=noindex, nofollow
-virtual-root=/
-
-repo.url={url}
-repo.path={dir_path}
-repo.owner={owner}
-repo.desc={short_description}
-...
-
-

Where you can uncomment the robots line to not let web crawlers (like Google’s) to index your git web app. And at the end keep all your repositories (the ones you want to make public), for example for my dotfiles I have:

-
...
-repo.url=.dots
-repo.path=/home/git/.dots.git
-repo.owner=luevano
-repo.desc=These are my personal dotfiles.
-...
-
-

Otherwise you could let cgit automatically detect your repositories (you have to be careful if you want to keep “private” repos) using the option scan-path, and set up .git/description for each repository. For more, you can check cgitrc(5).
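A sketch of that auto-discovery variant, assuming repositories live under /home/git as set up earlier (per cgitrc(5), scan-path should come after the global options so they apply to the discovered repos):

```
# /etc/cgitrc -- auto-discovery variant
css=/cgit.css
logo=/cgit.png
enable-http-clone=1
virtual-root=/

# Keep scan-path last; every repo found under it is published,
# and each repo's description file provides repo.desc.
scan-path=/home/git
```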

-

Cgit’s file rendering

-

By default you can’t see the files on the site, you need a highlighter to render the files, I use highlight. Install the highlight package:

-
pacman -S highlight
-
-

Copy the syntax-highlighting.sh script to the corresponding location (basically adding -edited to the file):

-
cp /usr/lib/cgit/filters/syntax-highlighting.sh /usr/lib/cgit/filters/syntax-highlighting-edited.sh
-
-

And edit it to use the version 3 and add --inline-css for more options without editing cgit‘s CSS file:

-
...
-# This is for version 2
-# exec highlight --force -f -I -X -S "$EXTENSION" 2>/dev/null
-
-# This is for version 3
-exec highlight --force --inline-css -f -I -O xhtml -S "$EXTENSION" 2>/dev/null
-...
-
-

Finally, enable the filter in /etc/cgitrc configuration:

-
source-filter=/usr/lib/cgit/filters/syntax-highlighting-edited.sh
-
-

That would be everything. If you need support for more stuff like compressed snapshots or support for markdown, check the optional dependencies for cgit.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/hoy_toco_desarrollo_personaje.html b/live/blog/a/hoy_toco_desarrollo_personaje.html deleted file mode 100644 index d66c4cd..0000000 --- a/live/blog/a/hoy_toco_desarrollo_personaje.html +++ /dev/null @@ -1,160 +0,0 @@ - - - - - - -Hoy me tocó desarrollo de personaje -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Today I got some character development

- -

I knew today wasn't going to be that good of a day, but I didn't know it would be this horrible; I got character development and pulled the bad ending.

-

Basically I had two quests to complete today: go to the bank for some paperwork, and get vaccinated against Covid-19. Very simple tasks.

-

First of all I woke up from a horrible nightmare, the kind where you get sleep paralysis while trying to wake up; I waited until near the end of my work hours, showered and headed straight to the bank first. All good so far.

-

On the way to the bank, while chatting with the Uber driver, the bank's schedule came up. Very calm, I said “well, I'm running a bit late, but I'll make it; they close at 5, right?”, to which the driver answered “nope boss, at 4, and they leave half an hour early”; I was shook. I checked and indeed they closed at 4. So I told him I'd change the route straight to where I was getting vaccinated, but it was too late and that was in the opposite direction. “No worries, drop me here and I'll request another ride”, I told him, and as always he wished me a better day; fortunately the bank was open for what I needed to do, so it was a nice turn of events. I got really happy and assumed it would be a good day, like my driver said; I literally had NO IDEA.

-

I left happy, having completed that quest, ready to go get vaccinated. I requested another Uber to where I had to go and all good. I had to walk a lot because the entrance was miles away from where the driver dropped me off, but no big deal, that was the least of it. I got discouraged when I saw a stupid amount of people; the line spanned the whole parking lot and wrapped around too many times. “Oh well”, I said, “at most I'll be here an hour, hour and a half”… again, I literally had NO IDEA.

-

Half an hour passed and I had advanced what seemed to be a quarter of the line, so everything was going fine. Well, no: I had advanced the equivalent of an eighth of the line; this was not going to be over in an hour or hour and a half. To make things worse, it was all under the ever beloved Chihuahua sun. “No problem, I'll entertain myself chatting with someone on whatsapp”. Nope, apparently I hadn't charged my phone and it was at 15-20% battery… shook again.

-

My battery died, an hour had passed and the line seemed infinite; we were simply moving too slowly, even though the people behind me kept repeating “look, it's moving pretty fast, we're almost there”. Delusional. I spent approximately 3 hours in line, enduring stupid conversations around me, people complaining about standing (I was complaining too, but inside my head), and for some reason whole families showed up, of which, at the end of the day, only one or two members actually went in to get vaccinated.

-

Anyway, the torture ended and it was time to go home, all good. “No problem, no Uber this time, I'll catch a bus here”, I thought. But no, not a single bus passed during the hour I waited, and of the 5 taxis I tried to stop, NONE pulled over. I decided to walk; whatever, at that point I was just getting angry for nothing.

-

On the way I saw an Oxxo and decided to take a detour to buy something to drink because I was really dehydrated. The very second I turned towards the Oxxo, a bus flew right past, and the only thing I could think of was the driver telling me “hehe, tough luck :)”. I exploded, I was done, I simply lost; I pulled the bad ending.

-

I was fed up and was even going to buy a charger just to get home quickly; I was tired of the day, the quest simply ended there, I had gotten the worst ending. The good thing is it occurred to me to ask the cashier for a charger and he helped me out. All good, I requested my Uber and got home safe and sound, but with the worst rage I'd felt in a long time. Simply destroyed. This day gave me some brutal character development; D*****o really went all out.

-

The only redeeming thing was that there was a very pretty girl (more like 5) in the line; too bad my character's stats have conversations with strangers locked.

-

And that's it; this served to vent, apologies for the crappy writing. Later.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/jellyfin_server_with_sonarr_radarr.html b/live/blog/a/jellyfin_server_with_sonarr_radarr.html deleted file mode 100644 index 44f0d0a..0000000 --- a/live/blog/a/jellyfin_server_with_sonarr_radarr.html +++ /dev/null @@ -1,686 +0,0 @@ - - - - - - -Set up a media server with Jellyfin, Sonarr and Radarr -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up a media server with Jellyfin, Sonarr and Radarr

- -

Second part of my self hosted media server. This is a direct continuation of Set up qBitTorrent with Jackett for use with Starr apps, which will be mentioned as “first part” going forward. Sonarr, Radarr, Bazarr (Starr apps) and Jellyfin setups will be described in this part. Same introduction applies to this entry, regarding the use of documentation and configuration.

-

Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.

-

Kindly note that I do not condone the use of BitTorrent for illegal activities. I take no responsibility for what you do when setting up anything shown here. It is up to you to check your local laws before using automated downloaders such as Sonarr and Radarr.

-

Table of contents

- -

Prerequisites

-

Same prerequisites as with the First part: Prerequisites plus:

- -

The First part: Directory structure is the same here. The servarr user and group should be available, too.

-

It is assumed that the first part was followed.

-

Radarr

-

Radarr is a movie collection manager that can be used to download movies via torrents. This is actually a fork of Sonarr, so they’re pretty similar, I just wanted to set up movies first.

-

Install from the AUR with yay:

-
yay -S radarr
-
-

Add the radarr user to the servarr group:

-
gpasswd -a radarr servarr
-
-

The default port that Radarr uses is 7878 for http (the one you need for the reverse proxy).

-

Reverse proxy

-

Add the following location blocks into the isos.conf with whatever subdirectory name you want, I’ll leave it as radarr:

-
location /radarr/ {
-    proxy_pass http://127.0.0.1:7878/radarr/; # change port if needed
-    proxy_http_version 1.1;
-
-    proxy_set_header Host $host;
-    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-    proxy_set_header X-Forwarded-Host $host;
-    proxy_set_header X-Forwarded-Proto $scheme;
-    proxy_set_header Upgrade $http_upgrade;
-    proxy_set_header Connection $http_connection;
-
-    proxy_redirect off;
-}
-# Allow the API External Access via NGINX
-location /radarr/api {
-    auth_basic off;
-    proxy_pass http://127.0.0.1:7878/radarr/api; # change port if needed
-}
-
-

This is taken from Radarr Nginx reverse proxy configuration. Restart the nginx service for the changes to take effect:

-
systemctl restart nginx.service
-
-

Start using Radarr

-

You can now start/enable the radarr.service:

-
systemctl enable radarr.service
-systemctl start radarr.service
-
-

This will start the service and create the default configs under /var/lib/radarr. You need to change the URLBase as the reverse proxy is under a subdirectory (/radarr). Edit /var/lib/radarr/config.xml:

-
...
-<UrlBase>/radarr</UrlBase>
-...
-
-

Then restart the radarr service:

-
systemctl restart radarr.service
-
-

Now https://isos.yourdomain.com/radarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials.

-

Note that if you want to have an anime movies library, it is recommended to run a second instance of Radarr for this as shown in Radarr: Linux multiple instances and follow TRaSH: How to setup quality profiles anime if an anime instance is what you want.

-

Configuration

-

Will be following the official Radarr: Quick start guide as well as the recommendations by TRaSH: Radarr.

-

Anything that is not mentioned in either guide or that is specific to how I’m setting up stuff will be stated below.

-
Media Management
- -
Quality
-

This is personal preference and it dictates your preferred file sizes. You can follow TRaSH: Quality settings to maximize the quality of the downloaded content and restrict low quality stuff.

-

Personally, I think TRaSH’s quality settings are a bit elitist and first world-y. I’m fine with whatever and the tracker I’m using has the quality I want anyways. I did, however, set it to a minimum of 0 and maximum of 400 for the qualities shown in TRaSH’s guide. Configuring anything below 720p shouldn’t be necessary anyways.

-
Custom Formats
-

Again, this is also completely a personal preference selection and depends on the quality and filters you want. My custom format selections are mostly based on TRaSH: HD Bluray + WEB quality profile.

-

The only Unwanted format that I’m not going to use is the Low Quality (LQ) one, as it blocks one of the sources I’m using to download a bunch of movies. The reasoning behind the LQ custom format is that these release groups don’t care much about quality (they keep low file sizes) or name tagging, which I understand; but I’m fine with this, as I can upgrade movies individually whenever I want (I want a big catalog of content that I can quickly watch).

-
Profiles
-

As mentioned in Custom Formats and Quality this is completly a personal preference. I’m going to go for “Low Quality” downloads by still following some of the conventions from TRaSH. I’m using the TRaSH: HD Bluray + WEB quality profile with the exclusion of the LQ profile.

-

I set the name to “HD Bluray + WEB”. I’m also not upgrading the torrents for now. Language set to “Original”.

-
Download clients
-

Pretty straightforward: just click on the giant “+” button and select the qBitTorrent option. Then configure:

- -

Everything else can be left as default, but maybe change Completed Download Handling if you’d like. Same goes for the general Failed Download Handling download clients’ option.

-
Indexers
-

Also easy to set up: just click on the giant “+” button and select the custom Torznab option (you can also use the preset -> Jackett Torznab option). Then configure:

- -

Everything else on default. Download Client can also be set, which can be useful to keep different categories per indexer or something similar. Seed Ratio and Seed Time can also be set and are used to manage when to stop the torrent, this can also be set globally on the qBitTorrent Web UI, this is a personal setting.

-

Download content

-

You can now start to download content by going to Movies -> Add New. Basically just follow the Radarr: How to add a movie guide. The screenshots from the guide are a bit outdated but it contains everything you need to know.

-

I personally use:

- -

Once you click on “Add Movie” it will add it to the Movies section and start searching and selecting the best torrent it finds, according to the “filters” (quality settings, profile and indexer(s)).

-

When it selects a torrent it sends it to qBitTorrent and you can even go ahead and monitor it over there. Else you can also monitor at Activity -> Queue.

-

After the movie is downloaded and processed by Radarr, it will create the appropriate hardlinks to the media/movies directory, as set in First part: Directory structure.
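Hardlinking is what makes this layout space-efficient: the file under media/movies and the seeding copy under the torrents directory are the same inode, so no data is duplicated. A quick sketch of the mechanism (the /tmp paths here are illustrative only, not the actual servarr layout; Radarr does the equivalent internally):

```shell
# Illustrative demo of hardlinking; paths are made up for this sketch
rm -rf /tmp/servarr-demo
mkdir -p /tmp/servarr-demo/torrents /tmp/servarr-demo/media/movies
echo "fake movie data" > /tmp/servarr-demo/torrents/movie.mkv
# Hardlink instead of copy: both names point at the same inode
ln /tmp/servarr-demo/torrents/movie.mkv /tmp/servarr-demo/media/movies/movie.mkv
# Link count is now 2, and no extra disk space is used
stat -c '%h' /tmp/servarr-demo/media/movies/movie.mkv
```

This only works because both directories live on the same filesystem, which is exactly why the first part's directory structure keeps torrents and media under one mount.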

-

Optionally, you can add subtitles using Bazarr.

-

Sonarr

-

Sonarr is a TV series collection manager that can be used to download series via torrents. Most of the install process, configuration and whatnot is going to be basically the same as with Radarr.

-

Install from the AUR with yay:

-
yay -S sonarr
-
-

Add the sonarr user to the servarr group:

-
gpasswd -a sonarr servarr
-
-

The default port that Sonarr uses is 8989 for http (the one you need for the reverse proxy).

-

Reverse proxy

-

Basically the same as with Radarr: Reverse proxy, except that the proxy_set_header Host changes from $host to $proxy_host.

-

Add the following location blocks into the isos.conf, I’ll leave it as sonarr:

-
location /sonarr/ {
-    proxy_pass http://127.0.0.1:8989/sonarr/; # change port if needed
-    proxy_http_version 1.1;
-
-    proxy_set_header Host $proxy_host; # this differs from the radarr reverse proxy
-    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-    proxy_set_header X-Forwarded-Host $host;
-    proxy_set_header X-Forwarded-Proto $scheme;
-    proxy_set_header Upgrade $http_upgrade;
-    proxy_set_header Connection $http_connection;
-
-    proxy_redirect off;
-}
-# Allow the API External Access via NGINX
-location /sonarr/api {
-    auth_basic off;
-    proxy_pass http://127.0.0.1:8989/sonarr/api; # change port if needed
-}
-
-

This is taken from Sonarr: Nginx reverse proxy configuration. Restart the nginx service for the changes to take effect:

-
systemctl restart nginx.service
-
-

Start using Sonarr

-

You can now start/enable the sonarr.service:

-
systemctl enable sonarr.service
-systemctl start sonarr.service
-
-

This will start the service and create the default configs under /var/lib/sonarr. You need to change the URLBase as the reverse proxy is under a subdirectory (/sonarr). Edit /var/lib/sonarr/config.xml:

-
...
-<UrlBase>/sonarr</UrlBase>
-...
-
-

Then restart the sonarr service:

-
systemctl restart sonarr.service
-
-

Now https://isos.yourdomain.com/sonarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials.

-

Similar to Radarr if you want to have an anime library, it is recommended to run a second instance of Sonarr for this as shown in Sonarr: Linux multiple instances and follow TRaSH: Release profile regex (anime) and the TRaSH: Anime recommended naming scheme if an anime instance is what you want.

-

Configuration

-

Will be following the official Sonarr: Quick start guide as well as the recommendations by TRaSH: Sonarr.

-

Anything that is not mentioned in either guide or that is specific to how I’m setting up stuff will be stated below.

-
Media Management
- -
Quality
-

Similar to Radarr: Quality this is personal preference and it dictates your preferred file sizes. You can follow TRaSH: Quality settings to maximize the quality of the downloaded content and restrict low quality stuff.

-

Will basically do the same as in Radarr: Quality: set minimum of 0 and maximum of 400 for everything 720p and above.

-
Profiles
-

This is a bit different than with Radarr: the way it is configured is by setting “Release profiles”. I took the profiles from TRaSH: WEB-DL Release profile regex. The only change I might make is disabling the Low Quality Groups and/or the “Golden rule” filter (for x265 encoded video).

-

For me it ended up looking like this:

-
-Sonarr: Release profiles -
Sonarr: Release profiles
-
-

But yours can differ as it’s mostly personal preference. For the “Quality profile” I’ll be using the default “HD-1080p” most of the time, but I also created a “HD + WEB (720/1080)” profile which works better for some series.

-
Download clients
-

Exactly the same as with Radarr: Download clients; the only change is the category, from movies to tv (or whatever you want). Click on the giant “+” button and select the qBitTorrent option. Then configure:

- -

Everything else can be left as default, but maybe change Completed Download Handling if you’d like. Same goes for the general Failed Download Handling download clients’ option.

-
Indexers
-

Also exactly the same as with Radarr: Indexers: click on the giant “+” button and select the custom Torznab option (this doesn’t have the Jackett preset). Then configure:

- -

Everything else on default. Download Client can also be set, which can be useful to keep different categories per indexer or something similar. Seed Ratio and Seed Time can also be set and are used to manage when to stop the torrent, this can also be set globally on the qBitTorrent Web UI, this is a personal setting.

-

Download content

-

Almost the same as with Radarr: Download content, but I’ve personally been selecting the torrents I want to download for each season/episode so far, as the indexers I’m using are all over the place and I like consistency. Will update if I find a (near) 100% automated process, but I’m fine with this anyway as I always monitor that everything is going fine.

-

Add series by going to Series -> Add New. Basically just follow the Sonarr: Library add new guide. Adding series requires a few more options than movies in Radarr, but it’s straightforward.

-

I personally use:

- -

Once you click on “Add X” it will be added to the Series section and will start as monitored. So far I haven’t noticed it immediately start downloading (because of the “Start search for missing episodes” setting), but I always unmonitor the series anyway so I can check manually (again, due to the low quality of my indexers).

-

When it automatically starts to download an episode/season it will send it to qBitTorrent and you can monitor it over there. Else you can also monitor at Activity -> Queue. Same thing goes if you download manually each episode/season via the interactive search.

-

To interactively search episodes/seasons go to Series and then click on any series, then click either on the interactive search button for the episode or the season, it is an icon of a person as shown below:

-
-Sonarr: Interactive search button -
Sonarr: Interactive search button
-
-

Then it will bring up a window with the search results, showing the indexer each result came from, the size of the torrent, peers, language, quality, the score it received from the configured release profiles, an alert in case the torrent is “bad”, and the download button to manually download the torrent you want. An example is shown below:

-
-Sonarr: Interactive search results -
Sonarr: Interactive search results
-
-

After the episode/season is downloaded and processed by Sonarr, it will create the appropriate hardlinks to the media/tv directory, as set in Directory structure.

-

Optionally, you can add subtitles using Bazarr.

-

Jellyfin

-

Jellyfin is a media server “manager”, usually used to manage and organize video content (movies, TV series, etc.) which could be compared with Plex or Emby for example (take them as possible alternatives).

-

Install from the AUR with yay:

-
yay -S jellyfin-bin
-
-

I’m installing the pre-built binary instead of building it as I was getting a lot of errors and the server was even crashing. You can try installing jellyfin instead.

-

Add the jellyfin user to the servarr group:

-
gpasswd -a jellyfin servarr
-
-

You can already start/enable the jellyfin.service which will start at http://127.0.0.1:8096/ by default where you need to complete the initial set up. But let’s create the reverse proxy first then start everything and finish the set up.

-

Reverse proxy

-

I’m going to have my jellyfin instance under a subdomain with an nginx reverse proxy as shown in the Arch wiki. For that, create a jellyfin.conf at the usual sites-<available/enabled> path for nginx:

-
server {
-    listen 80;
-    server_name jellyfin.yourdomain.com; # change accordingly to your wanted subdomain and domain name
-    set $jellyfin 127.0.0.1; # jellyfin is running at localhost (127.0.0.1)
-
-    # Security / XSS Mitigation Headers
-    add_header X-Frame-Options "SAMEORIGIN";
-    add_header X-XSS-Protection "1; mode=block";
-    add_header X-Content-Type-Options "nosniff";
-
-    # Content Security Policy
-    # See: https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
-    # Enforces https content and restricts JS/CSS to origin
-    # External Javascript (such as cast_sender.js for Chromecast) must be whitelisted.
-    add_header Content-Security-Policy "default-src https: data: blob: http://image.tmdb.org; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' https://www.gstatic.com/cv/js/sender/v1/cast_sender.js https://www.youtube.com blob:; worker-src 'self' blob:; connect-src 'self'; object-src 'none'; frame-ancestors 'self'";
-
-    location = / {
-        return 302 https://$host/web/;
-    }
-
-    location / {
-        # Proxy main Jellyfin traffic
-        proxy_pass http://$jellyfin:8096;
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_set_header X-Forwarded-Protocol $scheme;
-        proxy_set_header X-Forwarded-Host $http_host;
-
-        # Disable buffering when the nginx proxy gets very resource heavy upon streaming
-        proxy_buffering off;
-    }
-
-    # location block for /web - This is purely for aesthetics so /web/#!/ works instead of having to go to /web/index.html/#!/
-    location = /web/ {
-        # Proxy main Jellyfin traffic
-        proxy_pass http://$jellyfin:8096/web/index.html;
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_set_header X-Forwarded-Protocol $scheme;
-        proxy_set_header X-Forwarded-Host $http_host;
-    }
-
-    location /socket {
-        # Proxy Jellyfin Websockets traffic
-        proxy_pass http://$jellyfin:8096;
-        proxy_http_version 1.1;
-        proxy_set_header Upgrade $http_upgrade;
-        proxy_set_header Connection "upgrade";
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_set_header X-Forwarded-Protocol $scheme;
-        proxy_set_header X-Forwarded-Host $http_host;
-    }
-}
-
-

SSL certificate

-

Create/extend the certificate by running:

-
certbot --nginx
-
-

Similar to the isos subdomain, this will autodetect the new subdomain and extend the existing certificate(s). Restart the nginx service for changes to take effect:

-
systemctl restart nginx.service
-
-

Start using Jellyfin

-

You can now start/enable the jellyfin.service if you haven’t already:

-
systemctl enable jellyfin.service
-systemctl start jellyfin.service
-
-

Then navigate to https://jellyfin.yourdomain.com and either continue with the set up wizard if you didn’t already or continue with the next steps to configure your libraries.

-

The initial setup wizard makes you create an user (will be the admin for now) and at least one library, though these can be done later. For more check Jellyfin: Quick start.

-

Remember to use the configured directory as mentioned in Directory structure. Any other configuration (like adding users or libraries) can be done at the dashboard: click on the 3 horizontal lines on the top left of the Web UI then navigate to Administration -> Dashboard. I didn’t configure much other than adding a couple of users for me and friends; I wouldn’t recommend using the admin account to watch (personal preference).

-

Once there is at least one library it will show at Home along with the latest movies (if any) similar to the following (don’t judge, these are just the latest I added due to friend’s requests):

-
-Jellyfin: Home libraries -
Jellyfin: Home libraries
-
-

And inside the “Movies” library you can see the whole catalog where you can filter or just scroll as well as seeing Suggestions (I think this starts getting populated after a while) and Genres:

-
-Jellyfin: Library catalog options -
Jellyfin: Library catalog options
-
-

Plugins

-

You can also install/activate plugins to get extra features. Most of the plugins you might want to use are already available in the official repositories and can be found in the “Catalog”. There are a lot of plugins that are focused around anime and TV metadata, as well as an Open Subtitles plugin to automatically download missing subtitles (though this is managed with Bazarr).

-

To activate plugins click on the 3 horizontal lines on the top left of the Web UI then navigate to Administration -> Dashboard -> Advanced -> Plugins and click on the Catalog tab (top of the Web UI). Here you can select the plugins you want to install. By default only the official ones are shown; for more plugins you can add other repositories.

-

The only plugin I’m using is the “Playback Reporting”, to get a summary of what is being used in the instance. But I might experiment with some anime-focused plugins when the time comes.

-

Transcoding

-

Although not recommended, and despite explicitly setting it to not download any x265/HEVC content (by using the Golden rule), there might be cases where the only option you have is to download such content. If that is the case and you happen to have a way to do Hardware Acceleration, for example by having an NVIDIA graphics card (in my case), then you should enable it to avoid using lots of resources on your system.

-

Using hardware acceleration will leverage your GPU to do the transcoding and save resources on your CPU. I tried streaming x265 content and it basically used 70-80% on all cores of my CPU, while on the other hand using my GPU it used the normal amount on the CPU (70-80% on a single core).

-

These are the steps for an NVIDIA graphics card, specifically a GTX 1660 Ti. More info and guides can be found at Jellyfin: Hardware Acceleration for other acceleration methods.

-
NVIDIA drivers
-

Ensure you have the NVIDIA drivers and utils installed. If you’ve done this in the past then you can skip this part, else you might be using the default nouveau drivers. Follow the next steps to set up the NVIDIA drivers, which are basically a summary of NVIDIA: Installation for my setup, so double check the wiki in case you have an older NVIDIA graphics card.

-

Install the nvidia and nvidia-utils packages:

-
pacman -S nvidia nvidia-utils
-
-

Modify /etc/mkinitcpio.conf to remove kms from the HOOKS array. It should look like this (commented line is how it was for me before the change):

-
...
-# HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block filesystems fsck)
-HOOKS=(base udev autodetect modconf keyboard keymap consolefont block filesystems fsck)
-...
-
-
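If you prefer not to edit the file by hand, the same change can be made with sed; the expression below is my own suggestion (not from the wiki), so rehearse it on a scratch copy before touching /etc/mkinitcpio.conf:

```shell
# Rehearse removing "kms" from HOOKS on a scratch file first
printf 'HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block filesystems fsck)\n' > /tmp/mkinitcpio.test
# Drop the kms hook, collapsing its surrounding spaces into one
sed -i 's/ kms / /' /tmp/mkinitcpio.test
cat /tmp/mkinitcpio.test
```

Once the scratch output looks right, run the same sed against /etc/mkinitcpio.conf as root.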

Regenerate the initramfs by executing:

-
mkinitcpio -P
-
-

Finally, reboot the system. After the reboot you should be able to check your GPU info and processes being run with the GPU by executing nvidia-smi.

-
Enable hardware acceleration
-

Install from the AUR with yay:

-
yay -S jellyfin-ffmpeg6-bin
-
-

This provides the jellyfin-ffmpeg executable, which is necessary for Jellyfin to do hardware acceleration, it needs to be this specific one.

-

Then in Jellyfin go to the transcoding settings by clicking on the 3 horizontal lines on the top left of the Web UI and navigating to Administration -> Dashboard -> Playback -> Transcoding and:

- -

Don’t forget to click “Save” at the bottom of the Web UI, it will ask if you want to enable hardware acceleration.

-

Bazarr

-

Bazarr is a companion for Sonarr and Radarr that manages and downloads subtitles.

-

Install from the AUR with yay:

-
yay -S bazarr
-
-

Add the bazarr user to the servarr group:

-
gpasswd -a bazarr servarr
-
-

The default port that Bazarr uses is 6767 for http (the one you need for the reverse proxy), and it has pre-configured the default ports for Radarr and Sonarr.

-

Reverse proxy

-

Basically the same as with Radarr: Reverse proxy and Sonarr: Reverse proxy.

-

Add the following setting in the server block of the isos.conf:

-
server {
-    # server_name and other directives
-    ...
-
-    # Increase http2 max sizes
-    large_client_header_buffers 4 16k;
-
-    # some other blocks like location blocks
-    ...
-}
-
-

Then add the following location blocks in the isos.conf, where I’ll keep it as /bazarr/:

-
location /bazarr/ {
-    proxy_pass http://127.0.0.1:6767/bazarr/; # change port if needed
-    proxy_http_version 1.1;
-
-    proxy_set_header X-Real-IP $remote_addr;
-    proxy_set_header Host $http_host;
-    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-    proxy_set_header X-Forwarded-Proto $scheme;
-    proxy_set_header Upgrade $http_upgrade;
-    proxy_set_header Connection "Upgrade";
-
-    proxy_redirect off;
-}
-# Allow the Bazarr API through if you enable Auth on the block above
-location /bazarr/api {
-    auth_request off;
-    proxy_pass http://127.0.0.1:6767/bazarr/api;
-}
-
-

This is taken from Bazarr: Reverse proxy help. Restart the nginx service for the changes to take effect:

-
systemctl restart nginx.service
-
-

Start using Bazarr

-

You can now start/enable the bazarr.service if you haven’t already:

-
systemctl start bazarr.service
-systemctl enable bazarr.service
-
-

This will start the service and create the default configs under /var/lib/bazarr. You need to change the base_url for the necessary services as they’re running under a reverse proxy and under subdirectories. Edit /var/lib/bazarr/config/config.ini:

-
[general]
-port = 6767
-base_url = /bazarr
-
-[sonarr]
-port = 8989
-base_url = /sonarr
-
-[radarr]
-port = 7878
-base_url = /radarr
-
-

Then restart the bazarr service:

-
systemctl restart bazarr.service
-
-

Now https://isos.yourdomain.com/bazarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials. I also disabled Settings -> General -> Updates -> Automatic.

-

Configuration

-

Will be following the official Bazarr: Setup guide as well as the recommendations by TRaSH: Bazarr.

-

Anything that is not mentioned in either guide or that is specific to how I’m setting up stuff will be stated below.

-
Providers
-

This doesn’t require much thinking and it’s up to personal preference, but I’ll list the ones I added:

- -

I’ve tested this setup for the following languages (with all default settings as stated in the guides):

- -

I tried with “Latin American Spanish” but subtitles for it are hard to find; those two work pretty well.

-

None of these require an Anti-Captcha account (which is a paid service), but I created one anyway in case I ever need it; you do need to add credits to it (pretty cheap) if you ever use it.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/learned_go_and_lua_hard_way.html b/live/blog/a/learned_go_and_lua_hard_way.html deleted file mode 100644 index dc62d96..0000000 --- a/live/blog/a/learned_go_and_lua_hard_way.html +++ /dev/null @@ -1,159 +0,0 @@ - - - - - - -I had to learn Go and Lua the hard way -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

I had to learn Go and Lua the hard way

- -

TL;DR: I learned Go and Lua the hard way by forking (for fixing):

- -

In the last couple of days I’ve been setting up a Komga server for manga downloaded using metafates/mangal (upcoming set up entry about it) and everything was fine so far until I tried to download One Piece from MangaDex of which mangal has a built-in scraper. Long story short the issue was that MangaDex’s API only allows requesting manga chapters on chunks of 500 and the way that was being handled was completely wrong, specifics can be found on my commit (and the subsequent minor fix commit).

-

I tried to do a PR, but the project hasn’t been active since Feb 2023 (same reason I didn’t even try to do PRs on the other repos) so I closed it and will start working on my own fork, probably just merging everything Belphemur‘s fork has to offer, as he’s been working on mangal actively. I could probably just fork from him and/or just submit PRs to him, but I think I saw some changes I didn’t really like, will have to look more into it.

-

Also, while trying to use some of the custom scrapers I ran into issues with the headless Chrome implementation, where the browser didn’t close after each manga chapter download, causing my CPU and memory usage to max out and making me lose control of the system. So I also had to fork metafates/mangal-lua-libs and “fixed” (I say fixed because that wasn’t the issue in the end, it was how the custom scrapers were using it; shitty documentation) the issue by adding the browser.Close() function to the headless Lua API (commit) and merged some commits from the original vadv/gopher-lua-libs just to include any features added to the Lua libs needed.

-

Finally I forked metafates/mangal-scrapers (well, actually I forked NotPhantomX‘s fork, as they had included more scrapers from some pull requests) to be able to have updated custom Lua scrapers (in which I also fixed the headless bullshit) and use them on my mangal.

-

So, I went down the rabbit hole of manga scraping because I wanted to set up my Komga server, and more importantly I had to quickly learn Go and Lua (Lua was easier). I have to say that Go is super convoluted when it comes to module management; all the research I did led me to totally different answers, but that’s just because of different Go versions and the year of the responses.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/mail_server_with_postfix.html b/live/blog/a/mail_server_with_postfix.html deleted file mode 100644 index defe607..0000000 --- a/live/blog/a/mail_server_with_postfix.html +++ /dev/null @@ -1,527 +0,0 @@ - - - - - - -Set up a Mail server with Postfix, Dovecot, SpamAssassin and OpenDKIM -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up a Mail server with Postfix, Dovecot, SpamAssassin and OpenDKIM

- -

The entry is going to be long because it’s a tedious process. This is also based on Luke Smith’s script, but adapted to Arch Linux (his script works on debian-based distributions). This entry is mostly so I can record all the notes required while I’m in the process of installing/configuring the mail server on a new VPS of mine; also I was going to write a script that does everything in one go (for Arch Linux), to be hosted here. I haven’t had time to do the script so never mind this; if I ever do it I’ll make a new entry regarding it.

-

This configuration works for local users (users that appear in /etc/passwd), and does not use any type of SQL database. Do note that I’m not running Postfix in a chroot, which can be a problem if you’re following my steps, as noted by Bojan; in the case that you want to run in a chroot then follow the steps shown in the Arch wiki: Postfix in a chroot jail. The issue faced if following my steps while using a chroot is that there will be problems resolving the hostname due to /etc/hosts or /etc/hostname not being available in the chroot.

-

All commands executed here are run with root privileges, unless stated otherwise.

-

Table of contents

- -

Prerequisites

-

Basically the same as with the website with Nginx and Certbot, with the extras:

- -

Postfix

-

Postfix is a “mail transfer agent” which is the component of the mail server that receives and sends emails via SMTP.

-

Install the postfix package:

-
pacman -S postfix
-
-

We have two main files to configure (inside /etc/postfix): master.cf (master(5)) and main.cf (postconf(5)). We’re going to edit main.cf first either by using the command postconf -e 'setting' or by editing the file itself (I prefer to edit the file).

-

Note that the default file itself has a lot of comments with description on what each thing does (or you can look up the manual, linked above), I used what Luke’s script did plus some other settings that worked for me.

-

Now, first locate where your website cert is, mine is at the default location /etc/letsencrypt/live/, so my certdir is /etc/letsencrypt/live/luevano.xyz. Given this information, change {yourcertdir} on the corresponding lines. The configuration described below has to be appended in the main.cf configuration file.

-

Certificates and ciphers to use for authentication and security:

-
smtpd_tls_key_file = {yourcertdir}/privkey.pem
-smtpd_tls_cert_file = {yourcertdir}/fullchain.pem
-smtpd_use_tls = yes
-smtpd_tls_auth_only = yes
-smtp_tls_security_level = may
-smtp_tls_loglevel = 1
-smtp_tls_CAfile = {yourcertdir}/cert.pem
-smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
-smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
-smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
-smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
-tls_preempt_cipherlist = yes
-smtpd_tls_exclude_ciphers = aNULL, LOW, EXP, MEDIUM, ADH, AECDH, MD5,
-                DSS, ECDSA, CAMELLIA128, 3DES, CAMELLIA256,
-                RSA+AES, eNULL
-
-smtp_tls_CApath = /etc/ssl/certs
-smtpd_tls_CApath = /etc/ssl/certs
-
-smtpd_relay_restrictions = permit_sasl_authenticated, permit_mynetworks, defer_unauth_destination
-
-

Also, for the connection with dovecot, append the next few lines (telling postfix that dovecot will use user/password for authentication):

-
smtpd_sasl_auth_enable = yes
-smtpd_sasl_type = dovecot
-smtpd_sasl_path = private/auth
-smtpd_sasl_security_options = noanonymous, noplaintext
-smtpd_sasl_tls_security_options = noanonymous
-
-

Specify the mailbox home; this is going to be a directory inside your user’s home containing the actual mail files, for example it will end up being /home/david/Mail/Inbox:

-
home_mailbox = Mail/Inbox/
-
-

Pre-configuration to work seamlessly with dovecot and opendkim:

-
myhostname = {yourdomainname}
-mydomain = localdomain
-mydestination = $myhostname, localhost.$mydomain, localhost
-
-milter_default_action = accept
-milter_protocol = 6
-smtpd_milters = inet:127.0.0.1:8891
-non_smtpd_milters = inet:127.0.0.1:8891
-mailbox_command = /usr/lib/dovecot/deliver
-
-

Where {yourdomainname} is luevano.xyz in my case. Lastly, if you don’t want to leak the sender’s IP and user agent (the application used to send the mail) in outgoing headers, add the following line:

-
smtp_header_checks = regexp:/etc/postfix/smtp_header_checks
-
-

And create the /etc/postfix/smtp_header_checks file with the following content:

-
/^Received: .*/     IGNORE
-/^User-Agent: .*/   IGNORE
-
-
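As a quick sanity check (using grep as a rough stand-in for Postfix’s regexp table matching, with hypothetical header values), you can see which headers those two patterns would catch:

```shell
# sample headers an outgoing mail might carry (hypothetical values)
printf '%s\n' \
    'Received: from laptop (1.2.3.4)' \
    'User-Agent: Mutt/2.0' \
    'Subject: hello' > /tmp/headers.txt

# headers matching either pattern would get the IGNORE action;
# this prints the Received and User-Agent lines, Subject is kept
grep -E '^(Received|User-Agent): ' /tmp/headers.txt
```

On the server itself, `postmap -q "User-Agent: Mutt/2.0" regexp:/etc/postfix/smtp_header_checks` should print IGNORE once the file is in place.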

That’s it for main.cf, now we have to configure master.cf. This one is a bit more tricky.

-

First look up the lines (they’re uncommented) smtp inet n - n - - smtpd, smtp unix - - n - - smtp and -o syslog_name=postfix/$service_name and either delete or comment them out… or just run sed -i "/^\s*-o/d;/^\s*submission/d;/\s*smtp/d" /etc/postfix/master.cf as stated in Luke’s script.

-

Lastly, append the following lines to complete postfix setup and pre-configure for spamassassin.

-
smtp unix - - n - - smtp
-smtp inet n - y - - smtpd
-    -o content_filter=spamassassin
-submission inet n - y - - smtpd
-    -o syslog_name=postfix/submission
-    -o smtpd_tls_security_level=encrypt
-    -o smtpd_sasl_auth_enable=yes
-    -o smtpd_tls_auth_only=yes
-smtps inet n - y - - smtpd
-    -o syslog_name=postfix/smtps
-    -o smtpd_tls_wrappermode=yes
-    -o smtpd_sasl_auth_enable=yes
-spamassassin unix - n n - - pipe
-    user=spamd argv=/usr/bin/vendor_perl/spamc -f -e /usr/sbin/sendmail -oi -f \${sender} \${recipient}
-
-

Now, I ran into some problems with postfix, one being smtps: Servname not supported for ai_socktype. To fix it, as Till posted on that site, edit /etc/services and add:

-
smtps 465/tcp
-smtps 465/udp
-
-

Before starting the postfix service, you need to run newaliases, but you can do a bit of configuration beforehand by editing the file /etc/postfix/aliases. I only change the root: you line (where you is the account that will receive “root” mail). After you’re done, run:

-
postalias /etc/postfix/aliases
-newaliases
-
-
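For reference, the relevant change in /etc/postfix/aliases is a single line (david being a hypothetical local account that will receive root’s mail):

```
root: david
```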

At this point you’re done configuring postfix and you can already start/enable the postfix service:

-
systemctl start postfix.service
-systemctl enable postfix.service
-
-

Dovecot

-

Dovecot is an IMAP and POP3 server, which is what lets an email application retrieve the mail.

-

Install the dovecot and pigeonhole (sieve for dovecot) packages:

-
pacman -S dovecot pigeonhole
-
-

On Arch, by default, there is no /etc/dovecot directory with default configurations set in place, but the package does provide example configuration files. Create the dovecot directory under /etc and, optionally, copy the dovecot.conf file and conf.d directory into the just-created dovecot directory:

-
mkdir /etc/dovecot
-cp /usr/share/doc/dovecot/example-config/dovecot.conf /etc/dovecot/dovecot.conf
-cp -r /usr/share/doc/dovecot/example-config/conf.d /etc/dovecot
-
-

As Luke stated, dovecot comes with a lot of “modules” (under /etc/dovecot/conf.d/ if you copied that folder) for all sorts of configurations that you can include, but I do as he does and just edit/create the whole dovecot.conf file. Although I would like to check each of the separate configuration files dovecot provides, I think the options Luke provides are more than good enough.

-

I’m working with an empty dovecot.conf file. Add the following lines for SSL and login configuration (also replace {yourcertdir} with the same certificate directory described in the Postfix section above, note that the < is required):

-
ssl = required
-ssl_cert = <{yourcertdir}/fullchain.pem
-ssl_key = <{yourcertdir}/privkey.pem
-ssl_min_protocol = TLSv1.2
-ssl_cipher_list = ALL:!RSA:!CAMELLIA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SHA1:!SHA256:!SHA384:!LOW@STRENGTH
-ssl_prefer_server_ciphers = yes
-ssl_dh = </etc/dovecot/dh.pem
-
-auth_mechanisms = plain login
-auth_username_format = %n
-protocols = $protocols imap
-
-

You may notice we specify a file we don’t have under /etc/dovecot: dh.pem. We need to create it with openssl (you should already have it installed if you’ve been following this entry and the one for nginx). Just run (might take a few minutes):

-
openssl dhparam -out /etc/dovecot/dh.pem 4096
-
-

After that, the next lines define what a “valid user” is (really they just set the database for users and passwords to be the local users with their passwords):

-
userdb {
-    driver = passwd
-}
-
-passdb {
-    driver = pam
-}
-
-

Next, comes the mail directory structure (has to match the one described in the Postfix section). Here, the LAYOUT option is important so the boxes are .Sent instead of Sent. Add the next lines (plus any you like):

-
mail_location = maildir:~/Mail:INBOX=~/Mail/Inbox:LAYOUT=fs
-namespace inbox {
-    inbox = yes
-
-    mailbox Drafts {
-        special_use = \Drafts
-        auto = subscribe
-        }
-
-    mailbox Junk {
-        special_use = \Junk
-        auto = subscribe
-        autoexpunge = 30d
-        }
-
-    mailbox Sent {
-        special_use = \Sent
-        auto = subscribe
-        }
-
-    mailbox Trash {
-        special_use = \Trash
-        }
-
-    mailbox Archive {
-        special_use = \Archive
-        }
-}
-
-

Also include this so Postfix can use Dovecot’s authentication system:

-
service auth {
-    unix_listener /var/spool/postfix/private/auth {
-        mode = 0660
-        user = postfix
-        group = postfix
-        }
-}
-
-

Lastly (for Dovecot at least), the plugin configuration for sieve (pigeonhole):

-
protocol lda {
-    mail_plugins = $mail_plugins sieve
-}
-
-protocol lmtp {
-    mail_plugins = $mail_plugins sieve
-}
-
-plugin {
-    sieve = ~/.dovecot.sieve
-    sieve_default = /var/lib/dovecot/sieve/default.sieve
-    sieve_dir = ~/.sieve
-    sieve_global_dir = /var/lib/dovecot/sieve/
-}
-
-

Where /var/lib/dovecot/sieve/default.sieve doesn’t exist yet. Create the folders:

-
mkdir -p /var/lib/dovecot/sieve
-
-

And create the file default.sieve inside that just created folder with the content:

-
require ["fileinto", "mailbox"];
-if header :contains "X-Spam-Flag" "YES" {
-    fileinto "Junk";
-}
-
-

Now, if you don’t have a vmail (virtual mail) user, create one and change the ownership of the /var/lib/dovecot directory to this user:

-
grep -q "^vmail:" /etc/passwd || useradd -m vmail -s /usr/bin/nologin
-chown -R vmail:vmail /var/lib/dovecot
-
-

Note that I also changed the shell for vmail to be /usr/bin/nologin. After that, to compile the configuration file run:

-
sievec /var/lib/dovecot/sieve/default.sieve
-
-

A default.svbin file will be created next to default.sieve.

-

Next, add the following lines to /etc/pam.d/dovecot if not already present (shouldn’t be there if you’ve been following these notes):

-
auth required pam_unix.so nullok
-account required pam_unix.so
-
-

That’s it for Dovecot, at this point you can start/enable the dovecot service:

-
systemctl start dovecot.service
-systemctl enable dovecot.service
-
-

OpenDKIM

-

OpenDKIM is needed so services like G**gle don’t throw the mail to the trash. DKIM stands for “DomainKeys Identified Mail”.

-

Install the opendkim package:

-
pacman -S opendkim
-
-

Generate the keys for your domain:

-
opendkim-genkey -D /etc/opendkim -d {yourdomain} -s {yoursubdomain} -r -b 2048
-
-

Where you need to change {yourdomain} and {yoursubdomain} (doesn’t really need to be the sub-domain, could be anything that describes your key) accordingly, for me it’s luevano.xyz and mail, respectively. After that, we need to create some files inside the /etc/opendkim directory. First, create the file KeyTable with the content:

-
{yoursubdomain}._domainkey.{yourdomain} {yourdomain}:{yoursubdomain}:/etc/opendkim/{yoursubdomain}.private
-
-

So, for me it would be:

-
mail._domainkey.luevano.xyz luevano.xyz:mail:/etc/opendkim/mail.private
-
-

Next, create the file SigningTable with the content:

-
*@{yourdomain} {yoursubdomain}._domainkey.{yourdomain}
-
-

Again, for me it would be:

-
*@luevano.xyz mail._domainkey.luevano.xyz
-
-

And, lastly create the file TrustedHosts with the content:

-
127.0.0.1
-::1
-10.1.0.0/16
-1.2.3.4/24
-localhost
-{yourserverip}
-...
-
-

Add as many hosts as you need; make sure to include your server IP and something like {yoursubdomain}.{yourdomain}.
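The three files above follow a fixed pattern, so they can also be generated with a small shell sketch (the domain, selector, and server IP values are placeholders to adjust; paths match the ones used in this entry):

```shell
domain="luevano.xyz"    # {yourdomain}
selector="mail"         # {yoursubdomain}
serverip="1.2.3.4"      # {yourserverip}
dir="/etc/opendkim"

# KeyTable: one line mapping the key name to domain:selector:keyfile
printf '%s._domainkey.%s %s:%s:%s/%s.private\n' \
    "$selector" "$domain" "$domain" "$selector" "$dir" "$selector" > "$dir/KeyTable"

# SigningTable: sign everything sent from the domain with that key
printf '*@%s %s._domainkey.%s\n' "$domain" "$selector" "$domain" > "$dir/SigningTable"

# TrustedHosts: local addresses plus the server itself
printf '%s\n' 127.0.0.1 ::1 localhost "$serverip" "$selector.$domain" > "$dir/TrustedHosts"
```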

-

Next, edit /etc/opendkim/opendkim.conf to reflect the changes (or rather, addition) of these files, as well as some other configuration. You can look up the example configuration file located at /usr/share/doc/opendkim/opendkim.conf.sample, but I’m creating a blank one with the contents:

-
Domain {yourdomain}
-Selector {yoursubdomain}
-
-Syslog Yes
-UserID opendkim
-
-KeyFile /etc/opendkim/{yoursubdomain}.private
-Socket inet:8891@localhost
-
-

Now, change the permissions for all the files inside /etc/opendkim:

-
chown -R root:opendkim /etc/opendkim
-chmod g+r /etc/opendkim/*
-
-

I’m using root:opendkim so opendkim doesn’t complain about the {yoursubdomain}.private file being insecure (you can change that by using the option RequireSafeKeys False in the opendkim.conf file, as stated here).

-

That’s it for the general configuration, but you could go more in depth and be more secure with some extra configuration.

-

Now, just start/enable the opendkim service:

-
systemctl start opendkim.service
-systemctl enable opendkim.service
-
-

OpenDKIM DNS TXT records

-

Add the following TXT records on your domain registrar (these examples are for Epik):

-
1. DKIM entry: look up your {yoursubdomain}.txt file, it should look something like:
-
{yoursubdomain}._domainkey IN TXT ( "v=DKIM1; k=rsa; s=email; "
-    "p=..."
-    "..." )  ; ----- DKIM key mail for {yourdomain}
-
-

In the TXT record you will place {yoursubdomain}._domainkey as the “Host” and "v=DKIM1; k=rsa; s=email; " "p=..." "..." in the “TXT Value” (replace the dots with the actual value you see in your file).
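Since the value is split across several quoted strings in the .txt file, a quick way to flatten it into a single string you can paste (assuming the file format shown above; mail.txt being my key’s file name):

```shell
# join all quoted chunks of the record into one line, quotes stripped
grep -o '"[^"]*"' /etc/opendkim/mail.txt | tr -d '"' | tr -d '\n'; echo
```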

-
2. DMARC entry: just _dmarc.{yourdomain} as the “Host” and "v=DMARC1; p=reject; rua=mailto:dmarc@{yourdomain}; fo=1" as the “TXT Value”.

3. SPF entry: just @ as the “Host” and "v=spf1 mx a:{yoursubdomain}.{yourdomain} -all" as the “TXT Value”.

-

And at this point you could test your mail for spoofing and more.

-

SpamAssassin

-

SpamAssassin is just a mail filter to identify spam.

-

Install the spamassassin package (which will install a bunch of ugly perl packages…):

-
pacman -S spamassassin
-
-

For some reason, the permissions on all spamassassin stuff are all over the place. First, change owner of the executables, and directories:

-
chown spamd:spamd /usr/bin/vendor_perl/sa-*
-chown spamd:spamd /usr/bin/vendor_perl/spam*
-chown -R spamd:spamd /etc/mail/spamassassin
-
-

Then, you can edit local.cf (located in /etc/mail/spamassassin) to fit your needs (I only uncommented the rewrite_header Subject ... line). And then you can run the following command to update the patterns and compile them:

-
sudo -u spamd sa-update
-sudo -u spamd sa-compile
-
-

And since this should be run periodically, create the service spamassassin-update.service under /etc/systemd/system with the following content:

-
[Unit]
-Description=SpamAssassin housekeeping
-After=network.target
-
-[Service]
-User=spamd
-Group=spamd
-Type=oneshot
-
-ExecStart=/usr/bin/vendor_perl/sa-update --allowplugins
-SuccessExitStatus=1
-ExecStart=/usr/bin/vendor_perl/sa-compile
-ExecStart=/usr/bin/systemctl -q --no-block try-restart spamassassin.service
-
-

And you could also execute sa-learn to train spamassassin‘s bayes filter, but this works for me. Then create the timer spamassassin-update.timer under the same directory, with the content:

-
[Unit]
-Description=SpamAssassin housekeeping
-
-[Timer]
-OnCalendar=daily
-Persistent=true
-
-[Install]
-WantedBy=timers.target
-
-

You can now start/enable the spamassassin-update timer:

-
systemctl start spamassassin-update.timer
-systemctl enable spamassassin-update.timer
-
-

Next, you may want to edit the spamassassin service before starting and enabling it, because by default it could spawn a lot of “children”, eating a lot of resources, and you really only need one child. Append --max-children=1 to the ExecStart=... line in /usr/lib/systemd/system/spamassassin.service:

-
...
-ExecStart=/usr/bin/vendor_perl/spamd -x -u spamd -g spamd --listen=/run/spamd/spamd.sock --listen=localhost --max-children=1
-...
-
-
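Note that editing the packaged unit directly means the change gets overwritten on package updates; a drop-in override (created with systemctl edit spamassassin.service) survives them. A sketch of the equivalent override:

```ini
[Service]
# clear the packaged ExecStart, then set ours with --max-children=1
ExecStart=
ExecStart=/usr/bin/vendor_perl/spamd -x -u spamd -g spamd --listen=/run/spamd/spamd.sock --listen=localhost --max-children=1
```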

Finally, start and enable the spamassassin service:

-
systemctl start spamassassin.service
-systemctl enable spamassassin.service
-
-

Wrapping up

-

We should have a working mail server by now. Before continuing, check your journal logs (journalctl -xe --unit={unit}, where {unit} could be spamassassin.service, for example) to see if there was any error and try to debug it; it should be a typo somewhere, because all the settings and steps detailed here just worked. I literally just finished doing everything on a new server as of the writing of this text, it just werks on my machine.

-

Now, to actually use the mail service: first of all, you need a normal account (don’t use root) that belongs to the mail group (gpasswd -a user group to add a user user to group group) and that has a password.

-

Next, to actually log in with a mail app/program, you will use the following settings, at least for Thunderbird (I tested on Windows’ default Mail app and you don’t need a lot of settings):

- -

All that’s left to do is test your mail server for spoofing, and to see if everything is setup correctly. Go to DKIM Test and follow the instructions (basically click next, and send an email with whatever content to the email that they provide). After you send the email, you should see something like:

-
DKIM Test successful
-
- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/manga_server_with_komga.html b/live/blog/a/manga_server_with_komga.html deleted file mode 100644 index f99d3c9..0000000 --- a/live/blog/a/manga_server_with_komga.html +++ /dev/null @@ -1,539 +0,0 @@ - - - - - - -Set up a manga server with Komga and mangal -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up a manga server with Komga and mangal

- -

I’ve been wanting to set up a manga media server to hoard some mangas/comics and access them via Tachiyomi, but I didn’t have enough space in my vultr VPS. Now that I have symmetric fiber optic at home and a spare PC to use as a server, I decided to go ahead and create one. As always, I use arch btw, so these instructions are specific to it; I’m not sure how much easier/harder it is on other distros, I’m just too comfortable with arch honestly.

-

I’m going to run it as an exposed service using a subdomain of my own, so the steps take that into account; if you want to run it locally (or on a LAN/VPN) then it is going to be easier, with fewer steps (you’re on your own there). Also, as you might notice, I don’t like to use D*ck*r images or anything (ew).

-

At the time of editing this entry (06-28-2023) Komga has already upgraded to v.1.0.0 and it introduces some breaking changes if you already had your instance set up. Read more here. The only change I did here was changing the port to the new default.

-

As always, all commands are run as root unless stated otherwise.

-

Table of contents

- -

Prerequisites

-

Similar to my early tutorial entries, if you want it as a subdomain:

- -

yay

-

This is the first time I mention the AUR (and yay) in my entries, so I might as well just write a bit about it.

-

The AUR is the Arch Linux User Repository and it’s basically like an extension of the official one which is supported by the community, the only thing is that it requires a different package manager. The one I use (and I think everyone does, too) is yay, which as far as I know is like a wrapper of pacman.

-

Install

-

To install and use yay we need a normal account with sudo access; all yay-related commands are run as a normal user and it then asks for the sudo password. Installation is straightforward: clone the yay repo and install it. The only dependencies are git and base-devel:

-

Install dependencies:

-
sudo pacman -S git base-devel
-
-

Clone yay and install it (I also like to delete the cloned git repo):

-
git clone https://aur.archlinux.org/yay.git
-cd yay
-makepkg -si
-cd ..
-sudo rm -r yay
-
-

Usage

-

yay is used basically the same as pacman, with the difference that it is run as a normal user (it then asks for the sudo password) and that it asks for extra input when installing something, such as whether to build the package from source or to show package diffs.

-

To install a package (for example Komga in this blog entry), run:

-
yay -S komga
-
-

mangal

-

mangal is a CLI/TUI manga downloader with anilist integration and custom Lua scrapers.

-

You could install it from the AUR with yay:

-
yay -S mangal-bin
-
-

But I’ll use my fork as it contains some fixes and extra stuff.

-

Install from source

-

As I mentioned in my past entry, I had to fork mangal and related repositories to fix/change a few things. Currently the major fix I did in mangal is for the built-in MangaDex scraper, which had a really annoying bug in the chunking of the manga chapter listing.

-

So instead of installing with yay we’ll build it from source. We need to have go installed:

-
pacman -S go
-
-

Then clone my fork of mangal and install it:

-
git clone https://github.com/luevano/mangal.git # not sure if you can use SSH to clone
-cd mangal
-make install # or just `make build` and then move the binary to somewhere in your $PATH
-
-

This will use go install, so it will install to the path specified by the go environment variables (for more, run go help install). For me it was installed to $HOME/.local/bin/go/mangal because of my env vars; just make sure that directory is included in your PATH.

-

Check it was correctly installed by running mangal version, which should print something like:

-
▇▇▇ mangal
-
-  Version         ...
-  Git Commit      ...
-  Build Date      ...
-  Built By        ...
-  Platform        ...
-
-

Configuration

-

I’m going to do everything with a normal user (manga-dl) which I created just to download manga. So all of the commands will be run without sudo/root privileges.

-

Change some of the configuration options:

-
mangal config set -k downloader.path -v "/mnt/d/mangal" # downloads to current dir by default
-mangal config set -k formats.use -v "cbz" # downloads as pdf by default
-mangal config set -k installer.user -v "luevano" # points to my scrapers repository which contains a few extra scrapers and fixes, defaults to metafates' one; this is important if you're using my fork, don't use otherwise as it uses extra stuff I added
-mangal config set -k logs.write -v true # I like to get logs for what happens
-
-

Note: For testing purposes (if you want to explore mangal) set downloader.path once you’re ready to start to populate the Komga library directory (at Komga: populate manga library).

-

For more configs and to read what they’re for:

-
mangal config info
-
-

Also install the custom Lua scrapers by running:

-
mangal sources install
-
-

And install whatever you want, it picks up the sources/scrapers from the configured repository (installer.<key> config), if you followed, it will show my scrapers.

-

Usage

-

Two main ways of using mangal:

- -

Headless browser

-

Before continuing, I gotta say I went through some bullshit while trying to use the custom Lua scrapers that use the headless browser (actually just a wrapper of go-rod/rod, and honestly it is not really a “headless” browser, mangal “documentation” is just wrong). For more on my rant check out my last entry.

-

There is no concrete documentation on the “headless” browser, only that it is automatically set up and ready to use… but it doesn’t install any library/dependency needed. I discovered the following libraries that were missing on my Arch minimal install:

- -

To install them:

-
pacman -S nss at-spi2-core libcups libdrm libxcomposite libxdamage libxrandr mesa libxkbcommon pango alsa-lib
-
-

I can’t guarantee that those are all the packages needed; those are the ones I happened to discover (I had to fork the lua libs and add some logging because the error message was too fucking generic).

-

These dependencies are probably met by installing either chromedriver or google-chrome from the AUR (for what I could see on the package dependencies).

-

TUI

-

Use the TUI by running

-
mangal
-
-

Download manga using the TUI by selecting the source/scraper, searching for the manga/comic you want, and then selecting each chapter to download (use tab to select all). This is what I use when downloading manga that has already finished publishing, or when I’m just searching and testing out how it downloads the manga (directory name and manga information).

-

Note that some scrapers will contain duplicated chapters, as they have multiple uploads from the community, usually from different scanlation groups. This happens a lot with MangaDex.

-

Inline

-

The inline mode is a single terminal command meant to be used to automate stuff or for more advanced options. You can peek a bit into the “documentation” which honestly it’s ass because it doesn’t explain much. The minimal command for inline according to the mangal help is:

-
mangal inline --manga <option> --query <manga-title>
-
-

But this will not produce anything, because it also needs --source (or a default set via the config key downloader.default_sources) and either --json, which basically just does the search and returns the result in JSON format, or --download to actually download whatever is found; I recommend doing --json first to check that the correct manga will be downloaded, then doing --download.

-

Something not mentioned anywhere is the --manga flag options (I found them in the source code); it has 3 available options:

- -

Similarly for --chapters, there are a few options not explained (which I also found in the source code). I usually just use all, but the other options are:

- -

That said, I’ll do an example by using Mangapill as source, and will search for Demon Slayer: Kimetsu no Yaiba:

-
1. Search first and make sure my command will pull the manga I want:
-
mangal inline --source "Mangapill" --manga "exact" --query "Kimetsu no Yaiba" --json | jq # I use jq to pretty format the output
-
-
2. I make sure the json output contains the correct manga information: name, url, etc.; if the correct manga is not found or the anilist info is wrong, set the anilist binding manually:

-
mangal inline anilist set --name "Kimetsu no Yaiba" --id 101922
-
-
3. If I’m okay with the outputs, then I change --json for --download to actually download:
-
mangal inline --source "Mangapill" --manga "exact" --query "Kimetsu no Yaiba" --download
-
-
4. Check if the manga downloaded correctly. I do this by going to my download directory and checking the directory name (I’m picky with this stuff), that all chapters were downloaded, that it includes a correct series.json file, and that it contains a cover.<img-ext>; this usually means it correctly pulled information from anilist and that it will contain metadata Komga will be able to use.
-

Automation

-

The straightforward approach for automation is just to bundle a bunch of mangal inline commands in a shell script and schedule its execution either via cron or systemd/Timers. But, as always, I overcomplicated/overengineered my approach, which is the following:

-
1. Group manga names per source.
2. Configure anything that should always be set before executing mangal, including anilist bindings.
3. Have a way to track the changes/updates on each run.
4. Use that tracker to know where to start downloading chapters from.
    - This is optional, as you can just do --chapters "all" and it will work, but I do it mostly to keep the logs/output cleaner/shorter.
5. Download/update each manga using mangal inline.
6. Wrap everything in a systemd service and timer.
-

Manga list example:

-
mangapill="Berserk|Chainsaw Man|Dandadan|Jujutsu Kaisen|etc..."
-
-

Function that handles the download per manga in the list:

-
mangal_src_dl () {
-    source_name=$1
-    manga_list=$(echo "$2" | tr '|' '\n')
-
-    while IFS= read -r line; do
-        # By default download all chapters
-        chapters="all"
-        last_chapter_n=$(grep -e "$line" "$TRACKER_FILE" | cut -d'|' -f2 | grep -v -e '^$' | tail -n 1)
-        if [ -n "${last_chapter_n}" ]; then
-            chapters="$last_chapter_n-9999"
-            echo "Downloading [${last_chapter_n}-] chapters for $line from $source_name..."
-        else
-            echo "Downloading all chapters for $line from $source_name..."
-        fi
-        dl_output=$(mangal inline -S "$source_name" -q "$line" -m "exact" -F "$DOWNLOAD_FORMAT" -c "$chapters" -d)
-
-        if [ $? -ne 0 ]; then
-            echo "Failed to download chapters for $line."
-            continue
-        fi
-
-        line_count=$(echo "$dl_output" | grep -v -e '^$' | wc -l)
-        if [ $line_count -gt 0 ]; then
-            echo "Downloaded $line_count chapters for $line:"
-            echo "$dl_output"
-            new_last_chapter_n=$(echo "$dl_output" | tail -n 1 | cut -d'[' -f2 | cut -d']' -f1)
-            # manga_name|last_chapter_number|downloaded_chapters_on_this_update|manga_source
-            echo "$line|$new_last_chapter_n|$line_count|$source_name" >> $TRACKER_FILE
-        else
-            echo "No new chapters for $line."
-        fi
-    done <<< "$manga_list"
-}
-
-

Where $TRACKER_FILE is just a variable holding a path to some file where you can store the tracking and $DOWNLOAD_FORMAT the format for the mangas, for me it’s cbz. Then the usage would be something like mangal_src_dl "Mangapill" "$mangapill", meaning that it is a function call per source.

-

A simpler function without “tracking” would be:

-
mangal_src_dl () {
-    source_name=$1
-    manga_list=$(echo "$2" | tr '|' '\n')
-
-    while IFS= read -r line; do
-        echo "Downloading all chapters for $line from $source_name..."
-        mangal inline -S "$source_name" -q "$line" -m "exact" -F "$DOWNLOAD_FORMAT" -c "all" -d
-        if [ $? -ne 0 ]; then
-            echo "Failed to download chapters for $line."
-            continue
-        fi
-        echo "Finished downloading chapters for $line."
-    done <<< "$manga_list"
-}
-
-

The tracker file would have a format like follows:

-
# Updated: 06/10/23 10:53:15 AM CST
-Berserk|0392|392|Mangapill
-Dandadan|0110|110|Mangapill
-...
-
-

And note that if you already had manga downloaded and you run the script for the first time, then it will show as if it downloaded everything from the first chapter, but that’s just how mangal works, it will actually just discover downloaded chapters and only download anything missing.
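Since the tracker file appends one line per update, a manga can appear several times; a small awk sketch (assuming the format above) prints only the latest entry per manga:

```shell
# keep the last line seen for each manga name (field 1), skip comment lines
awk -F'|' '!/^#/ { last[$1] = $0 } END { for (m in last) print last[m] }' "$TRACKER_FILE"
```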

-

Any configuration the downloader/updater might need has to be done before the mangal_src_dl calls. I like to configure mangal for download path, format, etc. From personal experience and from others (mangal#170 and kaizoku#89), I found that the mangal and rod browser caches (the headless browser used in some custom sources) need to be cleared.

-

You should also set any anilist binding necessary for the downloading (as the cache was cleared). An example of an anilist binding I had to do is for Mushoku Tensei, as it has both a light novel and a manga version; for me it’s the following binding:

-
mangal inline anilist set --name "Mushoku Tensei - Isekai Ittara Honki Dasu" --id 85564
-
-

Finally, it’s just a matter of using your preferred way of scheduling; I’ll use systemd/Timers but anything is fine. You could make the downloader script more sophisticated and only run it each week on the day each manga (usually) gets released, but that’s too much work; I’ll just run it once daily, probably.
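As a sketch of the systemd/Timers part (the unit names, user, and script path below are hypothetical), the pair of units under /etc/systemd/system could look like:

```ini
# manga-dl.service
[Unit]
Description=Download/update manga with mangal

[Service]
Type=oneshot
User=manga-dl
ExecStart=/home/manga-dl/bin/manga-updater.sh

# manga-dl.timer
[Unit]
Description=Daily manga download/update

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Then start/enable only the timer: systemctl enable --now manga-dl.timer.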

-

A feature I want to add (and probably will) is sending notifications (probably through email) with a summary of the manga that downloaded or failed to download, so I stay on top of the updates. For now this is good enough and it’s been working so far.

-

Komga

-

Komga is a comics/mangas media server.

-

Install from the AUR:

-
yay -S komga
-
-

This komga package creates a komga (service) user and group which is tied to the also included komga.service.

-

Configure it by editing /etc/komga.conf:

-
SERVER_PORT=25600
-SERVER_SERVLET_CONTEXT_PATH=/ # this depends a lot of how it's going to be served (domain, subdomain, ip, etc)
-
-KOMGA_LIBRARIES_SCAN_CRON="0 0 * * * ?"
-KOMGA_LIBRARIES_SCAN_STARTUP=false
-KOMGA_LIBRARIES_SCAN_DIRECTORY_EXCLUSIONS='#recycle,@eaDir,@Recycle'
-KOMGA_FILESYSTEM_SCANNER_FORCE_DIRECTORY_MODIFIED_TIME=false
-KOMGA_REMEMBERME_KEY=USE-WHATEVER-YOU-WANT-HERE
-KOMGA_REMEMBERME_VALIDITY=2419200
-
-KOMGA_DATABASE_BACKUP_ENABLED=true
-KOMGA_DATABASE_BACKUP_STARTUP=true
-KOMGA_DATABASE_BACKUP_SCHEDULE="0 0 */8 * * ?"
-
-

My changes (shown above):

- -

If you’re going to run it locally (or on a LAN/VPN) you can start the komga.service and access it via IP at http://<your-server-ip>:<port>(/base_url) as stated in Komga: Accessing the web interface, then continue with the mangal section; otherwise, continue with the next steps for the reverse proxy and certificate.

-

Reverse proxy

-

Create the reverse proxy configuration (this is for nginx). In my case I’ll use a subdomain, so this is a new config called komga.conf at the usual sites-available/enabled path:

-
server {
-    listen 80;
-    server_name komga.yourdomain.com; # change accordingly to your wanted subdomain and domain name
-
-    location / {
-        proxy_pass http://localhost:25600; # change port if needed
-        proxy_http_version 1.1;
-
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-
-        proxy_read_timeout 600s;
-        proxy_send_timeout 600s;
-    }
-}
-
-

If it’s going to be used as a subdir on another domain then just change the location with /subdir instead of /; be careful with the proxy_pass directive, it has to match what you configured at /etc/komga.conf for the SERVER_SERVLET_CONTEXT_PATH regardless of the /subdir you selected at location.
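For that subdir case, the location block would look roughly like this (a sketch assuming /komga as the subdir and SERVER_SERVLET_CONTEXT_PATH=/komga in /etc/komga.conf):

```nginx
location /komga {
    # no URI on proxy_pass: the full /komga/... path is passed through,
    # which must match Komga's configured context path
    proxy_pass http://localhost:25600;
    proxy_http_version 1.1;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```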

-

SSL certificate

-

If using a subdir, then the same certificate for the subdomain/domain should work fine and no extra stuff is needed; if you’re following along with me, then we can create/extend the certificate by running:

-
certbot --nginx
-
-

That will automatically detect the new subdomain config and create/extend your existing certificate(s). In my case I manage each certificate’s subdomain:

-
certbot --nginx -d domainname.com -d subdomain.domainname.com -d komga.domainname.com
-
-

Start using Komga

-

We can now start/enable the komga.service:

-
systemctl enable komga.service
-systemctl start komga.service
-
-

And access the web interface at https://komga.domainname.com, which should show the login page for Komga. The first time, it will ask you to create an account, as shown in Komga: Create user account; this will be an admin account. Fill in the email and password (both can be changed later). The email doesn’t have to be an actual email, for now it’s just for management purposes.

-

The next thing would be to add any extra accounts (for read-only/download manga permissions), add/import libraries, etc. For now I’ll leave it here until we start downloading manga in the next steps.

-

Library creation

-

Creating a library is as simple as creating a directory somewhere and pointing to it in Komga. The following examples are for my use case, change accordingly. I’ll be using /mnt/d/mangal for my library (as stated in the mangal: configuration section):

-
mkdir /mnt/d/mangal
-
-

I chose the name mangal as it’s the name of the downloader/scraper; it could be anything, this is just how I like to organize stuff.

-

For the most part, the permissions don’t matter much (as long as it’s readable by the komga user) unless you want to delete some manga, in which case the komga user also needs write permissions.

-

Then just create the library in Komga web interface (the + sign next to Libraries), choose a name “Mangal” and point to the root folder /mnt/d/mangal, then just click Next, Next and Add for the defaults (that’s how I’ve been using it so far). This is well explained at Komga: Libraries.

-

The really important part (for me) is the permissions of the /mnt/d/mangal directory, as I want komga to have write access so I can manage it from the web interface itself. It looks like it’s just a matter of giving ownership to the komga user either as owner or as group (or to all for that matter), but since I’m going to use a separate user to download manga I need to choose carefully.

-

Set default directory permissions

-

The desired behaviour is: set komga as group owner, give the group write access, and have any new directory/file inherit these permission settings. I found out how to do it via this stack exchange answer. So, for me:

-
chown manga-dl:komga /mnt/d/mangal # required for group ownership for komga
-chmod g+s /mnt/d/mangal # required for group permission inheritance
-setfacl -d -m g::rwx /mnt/d/mangal # default permissions for group
-setfacl -d -m o::rx /mnt/d/mangal # default permissions for other (as normal, I think this command can be excluded)
-
-

Where manga-dl is the user I created to download manga with. Optionally add the -R flag to those 4 commands in case the directory already has subdirectories/files (this might mess up file permissions, but it’s not an issue as far as I know).

-

Checking that the permissions are set correctly (getfacl /mnt/d/mangal):

-
getfacl: Removing leading '/' from absolute path names
-# file: mnt/d/mangal
-# owner: manga-dl
-# group: komga
-# flags: -s-
-user::rwx
-group::rwx
-other::r-x
-default:user::rwx
-default:group::rwx
-default:other::r-x
-
-

You can then check by creating a new subdirectory (in /mnt/d/mangal) and it should have the same group permissions.

-

Populate manga library

-

You can now start downloading manga using mangal either manually or by running the cron/systemd/Timers and it will be detected by Komga automatically when it scans the library (once every hour according to my config). You can manually scan the library, though, by clicking on the 3 dots to the right of the library name (in Komga) and click on “Scan library files”.

-

Then you can check that the metadata is correct (once the manga is fully indexed and the metadata finished building), such as title, summary, chapter count, language, tags, genre, etc., which honestly never works fine, as mangal creates the series.json with a comicId field (upper case I) while Komga expects a lower case i (comicid), so Komga falls back to using the info from the first chapter. I’ll probably fix this on mangal’s side and see how it goes.
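Until that casing mismatch is fixed on mangal’s side, it can be patched externally. A minimal sketch in Python (the helper names are mine, and I’m assuming the comicId key may appear at any nesting level of series.json):

```python
import json
from pathlib import Path

def rename_key(obj, old: str, new: str):
    """Recursively rename a dict key anywhere in a JSON-like structure."""
    if isinstance(obj, dict):
        return {(new if k == old else k): rename_key(v, old, new) for k, v in obj.items()}
    if isinstance(obj, list):
        return [rename_key(v, old, new) for v in obj]
    return obj

def fix_series_json(library_root: str) -> int:
    """Patch every series.json under library_root so Komga finds 'comicid'."""
    fixed = 0
    for path in Path(library_root).rglob("series.json"):
        data = json.loads(path.read_text())
        patched = rename_key(data, "comicId", "comicid")
        if patched != data:
            path.write_text(json.dumps(patched, indent=2))
            fixed += 1
    return fixed
```

Running it over the library root (e.g. fix_series_json("/mnt/d/mangal")) before a library scan should let Komga pick up the intended series metadata.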

-

So, what I do is manually edit the metadata for the manga, changing whatever is wrong or adding what’s missing (I like adding Anilist and MyAnimeList links) and then leaving it as is. This is up to you.

-

Alternative downloaders

-

Just for the record, here is a list of downloaders/scrapers I considered before starting to use mangal:

- -

Others:

- - - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/new_blogging_system.html b/live/blog/a/new_blogging_system.html deleted file mode 100644 index 61b81d3..0000000 --- a/live/blog/a/new_blogging_system.html +++ /dev/null @@ -1,156 +0,0 @@ - - - - - - -I'm using a new blogging system -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

I'm using a new blogging system

- -

So, I was tired of working with ssg (and then sbg, which was a modified version of ssg that I “wrote”), for one general reason: not being able to extend it as I would like; and not just dumb little stuff, I wanted to be able to have more control, to add tags (which another tool that I found does: blogit), and even more in the future.

-

The solution? Write a new program “from scratch” in pYtHoN. Yes it is bloated, yes it is in its early stages, but it works just as I want it to, and I’m pretty happy so far with the results; I have even more ideas in mind to “optimize” and generally clean my wOrKfLoW to post new blog entries. I even thought of using it for posting into a “feed”-like gallery for drawings or pictures in general.

-

I called it pyssg, because it sounds nice and it wasn’t taken on PyPI. It is just a terminal program that reads either a configuration file or the options passed as flags when calling the program.

-

It still uses Markdown files because I find them very easy to work with. And instead of just having a “header” and a “footer” applied to each parsed entry, you have templates (generated with the program) for each piece that I thought made sense (idea taken from blogit): the common header and footer, the common header and footer for each entry, and the header, footer and list elements for articles and tags. When parsing the Markdown file these templates are applied and stitched together to make a single HTML file. It also generates an RSS feed and the sitemap.xml file, which is nice.

-

It might sound convoluted, but it works pretty well, with of course room to improve; I’m open to suggestions, issue reporting or direct contributions here. For now, it is only tested on Linux (I don’t plan on making it work on Windows, but feel free to send a PR for compatibility).

-

That’s it for now, the new RSS feed is available here: https://blog.luevano.xyz/rss.xml.

-

Update: Since writing this entry, pyssg has evolved quite a bit, so not everything described here is still true. For the latest updates check the newest entries or the git repository itself.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/password_manager_authenticator_setup.html b/live/blog/a/password_manager_authenticator_setup.html deleted file mode 100644 index 8f17596..0000000 --- a/live/blog/a/password_manager_authenticator_setup.html +++ /dev/null @@ -1,160 +0,0 @@ - - - - - - -My setup for a password manager and MFA authenticator -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

My setup for a password manager and MFA authenticator

- -

Disclaimer: I won’t go into many technical details here of how to install/configure/use the software, this is just supposed to be a short description on my setup.

-

It’s been a while since I started using a password manager at all, and I’m happy that I started with KeePassXC (an open source, multiplatform password manager that is completely offline) as a direct recommendation from EL ELE EME; before this I was using the same password for everything (like a lot of people), which is a well-known privacy issue as noted in detail by Leo (I don’t personally recommend LastPass as Leo does). Note that you will still need a master password to lock/unlock your password database (you can additionally use a hardware key and a key file).

-

Anyways, setting up keepass is pretty simple, as there is a client for almost any device; note that keepass is basically just the format and the base for all of the clients, as is common with pretty much any open source software. In my case I’m using KeePassXC on my computer and KeePassDX on my phone (Android). The only concern is keeping everything in sync, because keepass doesn’t have any automatic method of synchronizing between devices for security reasons (as far as I know), meaning that you have to manage that yourself.

-

Usually you can use something like G**gl* drive, dropbox, mega, nextcloud, or any other cloud solution that you like to sync your keepass database between devices; I personally prefer to use Syncthing as it’s open source, it’s really easy to setup and has worked wonders for me since I started using it, also it keeps versions of your files that can serve as backups in any scenario where the database gets corrupted or something.

-

Finally, when I went through the issue with the micro SD and the adoptable storage bullshit (you can find the rant here, in Spanish) I also had to migrate from G**gl* authenticator (gauth) to something else, for the simple reason that gauth doesn’t even let you do backups, nor is it synched with your account… nothing, it is just standalone and if you ever lose your phone you’re fucked. So I decided to go with Aegis authenticator: it is open source, you have control over all your secret keys, you can do backups directly to the filesystem, you can secure your database with an extra password, etc., etc.. In general aegis is the superior MFA authenticator (at least compared with gauth), and everything that’s compatible with gauth is compatible with aegis, as the format is a standard (as a matter of fact, keepass also has this MFA feature, called TOTP, and it is also compatible, but I prefer to have things separate). I also use syncthing to keep a backup of my aegis database.

-

TL;DR:

- - - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/pastebin_alt_with_privatebin.html b/live/blog/a/pastebin_alt_with_privatebin.html deleted file mode 100644 index ef62906..0000000 --- a/live/blog/a/pastebin_alt_with_privatebin.html +++ /dev/null @@ -1,401 +0,0 @@ - - - - - - -Set up a pastebin alternative with PrivateBin and YOURLS -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up a pastebin alternative with PrivateBin and YOURLS

- -

I learned about PrivateBin a few weeks back and ever since I’ve been looking into installing it, along with a URL shortener (a service I wanted to self host since forever). It took me a while as I ran into some problems while experimenting and documenting all the necessary bits in here.

-

My setup is exposed to the public, and as always it’s heavily based on previous entries as described in Prerequisites. Descriptions of setting up MariaDB (the preferred MySQL replacement for Arch) and PHP are written in this entry, as this is the first time I’ve needed them.

-

Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.

-

Table of contents

- -

Prerequisites

-

If you want to expose to a (sub)domain, then similar to my early tutorial entries (especially the website one, for the reverse proxy plus certificates):

- -

MariaDB

-

MariaDB is a drop-in replacement of MySQL.

-

Install the mariadb package:

-
pacman -S mariadb
-
-

Before starting/enabling the systemd service run:

-
mariadb-install-db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
-
-

start/enable the mariadb.service:

-
systemctl start mariadb.service
-systemctl enable mariadb.service
-
-

Run and follow the secure installation script before proceeding any further:

-
mariadb-secure-installation
-
-

Change the binding address so the service listens on localhost only by modifying /etc/my.cnf.d/server.cnf:

-
[mariadb]
-bind-address = localhost
-
-

Create users/databases

-

To use mariadb simply run the command and it will try to log in as the linux user running it. The general login command is:

-
mariadb -u <username> -p <database_name>
-
-

The database_name is optional. It will prompt for a password.

-

Using mariadb as root, create users with their respective database if needed with the following queries:

-
MariaDB> CREATE USER '<username>'@'localhost' IDENTIFIED BY '<password>';
-MariaDB> CREATE DATABASE <database_name>;
-MariaDB> GRANT ALL PRIVILEGES ON <database_name>.* TO '<username>'@'localhost';
-MariaDB> quit
-
-

The database_name will depend on how YOURLS and PrivateBin are configured, that is, whether each service uses a separate database and/or table prefixes.
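For instance, assuming one dedicated user and database per service (the privatebin names match the config shown later in this entry; the yourls ones are just my choice), the queries would look like:

```sql
-- YOURLS
CREATE USER 'yourls'@'localhost' IDENTIFIED BY '<password>';
CREATE DATABASE yourls;
GRANT ALL PRIVILEGES ON yourls.* TO 'yourls'@'localhost';

-- PrivateBin
CREATE USER 'privatebin'@'localhost' IDENTIFIED BY '<password>';
CREATE DATABASE privatebin;
GRANT ALL PRIVILEGES ON privatebin.* TO 'privatebin'@'localhost';
```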

-

PHP

-

PHP is a general-purpose scripting language that is usually used for web development; it was considered ass for a long time, but that seems to be a misconception from older times.

-

Install the php, php-fpm, php-gd packages:

-
pacman -S php php-fpm php-gd
-
-

start/enable the php-fpm.service:

-
systemctl start php-fpm.service
-systemctl enable php-fpm.service
-
-

Configuration

-

Only the changes needed are shown. The main config file is located at /etc/php/php.ini, or drop-in files can be placed at /etc/php/conf.d/ instead.

-

Set timezone (list of timezones):

-
date.timezone = Europe/Berlin
-
-

Enable the gd and mysql extensions:

-
extension=gd
-extension=pdo_mysql
-extension=mysqli
-
-

Nginx

-

Create a PHP specific config that can be reusable at /etc/nginx/php_fastcgi.conf:

-
location ~ \.php$ {
-    # required for yourls
-    add_header Access-Control-Allow-Origin $http_origin;
-
-    # 404
-    try_files $fastcgi_script_name =404;
-
-    # default fastcgi_params
-    include fastcgi_params;
-
-    # fastcgi settings
-    fastcgi_pass                        unix:/run/php-fpm/php-fpm.sock;
-    fastcgi_index                       index.php;
-    fastcgi_buffers                     8 16k;
-    fastcgi_buffer_size         32k;
-
-    # fastcgi params
-    fastcgi_param DOCUMENT_ROOT $realpath_root;
-    fastcgi_param SCRIPT_FILENAME       $realpath_root$fastcgi_script_name;
-    #fastcgi_param PHP_ADMIN_VALUE      "open_basedir=$base/:/usr/lib/php/:/tmp/";
-}
-
-

This then can be imported by any server directive that needs it.

-

YOURLS

-

YOURLS is a self-hosted URL shortener that is supported by PrivateBin.

-

Install from the AUR with yay:

-
yay -S yourls
-
-

Create a new user and database as described in MariaDB: Create users/databases.

-

Configuration

-

The default configuration file is self explanatory, it is located at /etc/webapps/yourls/config.php. Make sure to correctly set the user/database YOURLS will use and either create a cookie or get one from the URL provided.

-

It is important to change the $yourls_user_passwords variable; YOURLS will hash the passwords on login so they are not stored in plaintext. Password hashing can be disabled with:

-
define( 'YOURLS_NO_HASH_PASSWORD', true );
-
-

I also changed the “shortening method” to 62 to include more characters:

-
define( 'YOURLS_URL_CONVERT', 62 );
-
-

The $yourls_reserved_URL variable will need more blacklisted words depending on the use-case. Make sure the YOURLS_PRIVATE variable is set to true (default) if the service will be exposed to the public.
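As a sketch of what I mean (the exact default list ships with the config template, so treat these entries as examples only), extending the blacklist in config.php could look like:

```php
// keywords that can't be used as custom short URLs;
// extend with anything you don't want to be claimable
$yourls_reserved_URL = array(
    'porn', 'faq', 'hits', 'bookmarklet',        // sample defaults
    'admin', 'login', 'logout', 'api', 'stats',  // service paths worth reserving
);
```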

-

Nginx

-

Create a yourls.conf at the usual sites-<available/enabled> path for nginx:

-
server {
-    listen 80;
-    root /usr/share/webapps/yourls/;
-    server_name short.example.com;
-    index index.php;
-
-    location / {
-        try_files $uri $uri/ /yourls-loader.php$is_args$args;
-    }
-
-    include /etc/nginx/php_fastcgi.conf;
-}
-
-

Make sure the following header is included in the php‘s nginx location block described in YOURLS: Nginx:

-
add_header Access-Control-Allow-Origin $http_origin;
-
-

SSL certificate

-

Create/extend the certificate by running:

-
certbot --nginx
-
-

Restart the nginx service for changes to take effect:

-
systemctl restart nginx.service
-
-

Usage

-

The admin area is located at https://short.example.com/admin/, log in with any of the configured users set with $yourls_user_passwords in the config. Activate plugins by going to the “Manage Plugins” page (located at the top left) and clicking the respective “Activate” button in the “Action” column, as shown below:

-
-YOURLS: Activate plugin -
YOURLS: Activate plugin
-
-

I personally activated the “Random ShortURLs” and “Allow Hyphens in Short URLs”. Once the “Random ShortURLs” plugin is activated it can be configured by going to the “Random ShortURLs Settings” page (located at the top left, right below “Manage Plugins”), only config available is “Random Keyword Length”.

-

The main admin area can be used to manually shorten any link provided, by using the automatic shortening or by providing a custom short URL.

-

Finally, the “Tools” page (located at the top left) contains the signature token, used for YOURLS: Passwordless API, as well as useful bookmarklets for URL shortening while browsing.

-

PrivateBin

-

PrivateBin is a minimalist self-hosted alternative to pastebin.

-

Install from the AUR with yay:

-
yay -S privatebin
-
-

Create a new user and database as described in MariaDB: Create users/databases.

-

Configuration

-

This heavily depends on personal preference, all defaults are fine. Make a copy of the sample config template:

-
cp /etc/webapps/privatebin/conf.sample.php /etc/webapps/privatebin/conf.php
-
-

The most important changes needed are basepath according to the privatebin URL and the [model] and [model_options] to use MySQL instead of plain filesystem files:

-
[model]
-; example of DB configuration for MySQL
-class = Database
-[model_options]
-dsn = "mysql:host=localhost;dbname=privatebin;charset=UTF8"
-tbl = "privatebin_"     ; table prefix
-usr = "privatebin"
-pwd = "<password>"
-opt[12] = true    ; PDO::ATTR_PERSISTENT
-
-

Any other [model] or [model_options] needs to be commented out (for example, the default filesystem setting).

-

YOURLS integration

-

I recommend creating a separate user for privatebin in yourls by modifying the $yourls_user_passwords variable in the yourls config file. Then log in with this user and get the signature from the “Tools” section in the admin page; for more, see YOURLS: Passwordless API.

-

For a “private” yourls installation (that needs username/password), set urlshortener:

-
urlshortener = "https://short.example.com/yourls-api.php?signature=xxxxxxxxxx&action=shorturl&format=json&url="
-
-

Note that this will expose the signature in the HTTP requests and anybody with the signature can use it to shorten external URLs.

-

Nginx

-

To deny access to some bots/crawlers, PrivateBin provides a sample .htaccess, which is used in Apache. We need an Nginx version, which I found here.

-

Add the following at the beginning of the http block of the /etc/nginx/nginx.conf file:

-
http {
-    map $http_user_agent $pastebin_badagent {
-        ~*bot 1;
-        ~*spider 1;
-        ~*crawl 1;
-        ~https?:// 1;
-        WhatsApp 1;
-        SkypeUriPreview 1;
-        facebookexternalhit 1;
-    }
-
-    #...
-}
-
-

Create a privatebin.conf at the usual sites-<available/enabled> path for nginx:

-
server {
-    listen 80;
-    root /usr/share/webapps/privatebin/;
-    server_name bin.example.com;
-    index index.php;
-
-    if ($pastebin_badagent) {
-       return 403;
-    }
-
-    location / {
-        try_files $uri $uri/ /index.php$is_args$args;
-    }
-
-    include /etc/nginx/php_fastcgi.conf;
-}
-
-

SSL certificate

-

Create/extend the certificate by running:

-
certbot --nginx
-
-

Restart the nginx service for changes to take effect:

-
systemctl restart nginx.service
-
- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/rewrote_pyssg_again.html b/live/blog/a/rewrote_pyssg_again.html deleted file mode 100644 index b871a0f..0000000 --- a/live/blog/a/rewrote_pyssg_again.html +++ /dev/null @@ -1,152 +0,0 @@ - - - - - - -Rewrote pyssg again -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Rewrote pyssg again

- -

I’ve been wanting to change the way pyssg reads config files and generates HTML files so that it is more flexible and I don’t need 2 separate build commands and configs (for blog and art), and also so it can handle other types of “sites”; pyssg was built with blogging in mind, so it was a bit limited in how it could be used. So I had to kind of rewrite pyssg, and with the latest version I can now generate the whole site and use the same templates for everything, quite neat for my use case.

-

Anyways, I bought a new domain for all pyssg related stuff, mostly because I wanted somewhere to test live builds while developing; it is of course pyssg.xyz. As of now it uses the same template, CSS and scripts that I use here, but that will probably change in the future. I’ll be testing new features and anything pyssg related there.

-

I should start pointing all pyssg links to the actual site instead of the github repository (or my git repository), but I haven’t decided how to handle everything yet.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/tenia_esto_descuidado.html b/live/blog/a/tenia_esto_descuidado.html deleted file mode 100644 index b379405..0000000 --- a/live/blog/a/tenia_esto_descuidado.html +++ /dev/null @@ -1,154 +0,0 @@ - - - - - - -Tenía este pex algo descuidado -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

I had this thing a bit neglected

- -

That’s right, I had this a bit neglected, the main reason being that I was busy with professional life stuff, ouch. But now that I’m a bit more clear-headed and less stressed I’ll keep using the blog and see what else I do.

-

I have some pending entries I want to write in the “tutorial” or “how-to” style, but I’ve been debating it, because Luke already started doing it more seriously at landchad.net, which I highly recommend since I started doing this because of him (and EL ELE EME); although honestly it’s very specific to how he does things and there can be differences, but I’ll see about it these days. The next one I want to do is about the VPN, because I haven’t set it up since I restarted The Website and The Server, so I’ll set up the VPN again and write an entry about it while I’m at it.

-

I also left a drawing pending, which honestly I dropped for 2 reasons: it’s really fucking hard (because I also want to color it) and I was busy; of which only the “really fucking hard” part remains, but I haven’t had the guts to pick it back up. The sad part is that the hype window already passed and I don’t have much motivation to finish it, other than the fact that once I finish it I’ll start using Clip Studio Paint instead of Krita, because I bought a license now that it was 50% off.

-

Something good is that I’ve been feeling really good about myself lately, even if I barely talk about it. There is a specific reason, but it’s kind of a silly one. I hope it stays that way.

-

Oh, and I also wanted to set up a comments section, but as always, all the options are super bloated, so I was going to quickly make my own, probably in Python for the back end, MySQL for the database and JavaScript for the connection here on the front end, something chill. Nah, turns out I don’t need this anyway, what for.

-

That’s it then.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/torrenting_with_qbittorrent.html b/live/blog/a/torrenting_with_qbittorrent.html deleted file mode 100644 index 8cd9dae..0000000 --- a/live/blog/a/torrenting_with_qbittorrent.html +++ /dev/null @@ -1,411 +0,0 @@ - - - - - - -Set up qBitTorrent with Jackett for use with Starr apps -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up qBitTorrent with Jackett for use with Starr apps

- -

Riding on my excitement of having a good internet connection and having set up my home server, it’s now time to self host a media server for movies, series and anime. I’ll set up qBitTorrent as the downloader, Jackett for the trackers, the Starr apps for the automatic downloading and Jellyfin as the media server manager/media viewer. This was going to be a single entry, but it ended up being a really long one, so I’m splitting it, this being the first part.

-

I’ll be exposing my stuff on a subdomain, but only so I can access it while away from home and for SSL certificates (not required); it shouldn’t be necessary, and you can use a VPN instead (how to set up). For your reference, whenever I say “Starr apps” (*arr apps) I mean the family of apps such as Sonarr, Radarr, Bazarr, Readarr, Lidarr, etc..

-

Most of my config is based on the TRaSH-Guides (mentioned as “TRaSH” going forward). Especially get familiar with the TRaSH: Native folder structure and with TRaSH: Hardlinks and instant moves. I will also use the default configurations based on the respective documentation for each Starr app and service, except when stated otherwise.

-

Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.

-

Kindly note that I do not condone the use of torrenting for illegal activities. I take no responsibility for what you do when setting up anything shown here. It is for you to check your local laws before using automated downloaders such as Sonarr and Radarr.

-

Table of contents

- -

Prerequisites

-

The specific programs are mostly recommendations; if you’re familiar with something else or want to change things around, feel free to do so, but everything here will be written with them in mind.

-

If you want to expose to a (sub)domain, then similar to my early tutorial entries (especially the website one, for the reverse proxy plus certificates):

- -

Note: I’m using the explicit 127.0.0.1 IP instead of localhost in the reverse proxies/services config, as localhost sometimes resolves to IPv6, which is not configured correctly on my server. If you have it configured you can use localhost without any issue.
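The behaviour can be checked from Python’s resolver, for example; this is just a quick sketch to see what localhost resolves to on a given machine:

```python
import socket

# 'localhost' commonly resolves to both the IPv6 (::1) and IPv4 (127.0.0.1)
# loopback addresses; which one a client tries first depends on the system's
# resolver configuration, hence the explicit 127.0.0.1 in the configs below
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", 80)}
print(addrs)
```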

-

Directory structure

-

Basically following TRaSH: Native folder structure, except for the directory permissions part; I’ll do the same as with my Komga setup guide to establish default group permissions.

-

The desired behaviour is: set servarr as group ownership, set write access to group and whenever a new directory/file is created, inherit these permission settings. servarr is going to be a service user and I’ll use the root of a mounted drive at /mnt/a.

-
    -
  1. Create a service user called servarr (it could just be a group, too):
-
useradd -r -s /usr/bin/nologin -M -c "Servarr applications" servarr
-
-
    -
  2. Create the torrents directory and set default permissions:
-
cd /mnt/a # change this according to your setup
-mkdir torrents
-chown servarr:servarr torrents
-chmod g+w torrents
-chmod g+s torrents
-setfacl -d -m g::rwx torrents
-setfacl -d -m o::rx torrents
-
-
    -
  3. Check that the permissions are set correctly (getfacl torrents):
-
# file: torrents/
-# owner: servarr
-# group: servarr
-# flags: -s-
-user::rwx
-group::rwx
-other::r-x
-default:user::rwx
-default:group::rwx
-default:other::r-x
-
-
    -
  4. Create the subdirectories you want with any user (I’ll be using servarr personally):
-
mkdir torrents/{tv,movies,anime}
-chown -R servarr: torrents
-
-
    -
  5. Finally repeat steps 2-4 for the media directory.
-

The final directory structure should be the following:

-
root_dir
-├── torrents
-│   ├── movies
-│   ├── music
-│   └── tv
-└── media
-    ├── movies
-    ├── music
-    └── tv
-
-

Where root_dir is /mnt/a in my case. This is going to be the reference for the setup of the following applications.

-

Later, add the necessary users to the servarr group if they need write access, by executing:

-
gpasswd -a <USER> servarr
-
-

Jackett

-

Jackett is a “proxy server” (or “middleware”) that translates queries from apps (such as the Starr apps in this case) into tracker-specific HTTP queries. Note that there is an alternative called Prowlarr that is better integrated with most if not all Starr apps, requiring less maintenance; I’ll still be sticking with Jackett, though.

-

Install from the AUR with yay:

-
yay -S jackett
-
-

I’ll be using the default 9117 port, but change accordingly if you decide on another one.

-

Reverse proxy

-

I’m going to have most of the services under the same subdomain, with different subdirectories. Create the config file isos.conf at the usual sites-available/enabled path for nginx:

-
server {
-    listen 80;
-    server_name isos.yourdomain.com;
-
-    location /jack { # you can change this to jackett or anything you'd like, but it has to match the jackett config on the next steps
-        proxy_pass http://127.0.0.1:9117; # change the port according to what you want
-
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_set_header X-Forwarded-Host $http_host;
-        proxy_redirect off;
-    }
-}
-
-

This is the basic reverse proxy config as shown in Jackett: Running Jackett behind a reverse proxy. The rest of the services will be added under different location blocks in their respective steps.

-

SSL certificate

-

Create/extend the certificate by running:

-
certbot --nginx
-
-

That will automatically detect the new subdomain config and create/extend your existing certificate(s). Restart the nginx service for changes to take effect:

-
systemctl restart nginx.service
-
-

Start using Jackett

-

You can now start/enable the jackett.service:

-
systemctl enable jackett.service
-systemctl start jackett.service
-
-

It will autocreate the default configuration under /var/lib/jackett/ServerConfig.json, which you need to edit at least to change the BasePathOverride to match what you used in the nginx config:

-
{
-    "Port": 9117,
-    "SomeOtherConfigs": "some_other_values",
-    "BasePathOverride": "/jack",
-    "MoreConfigs": "more_values"
-}
-
-

Also modify the Port if you changed it. Restart the jackett service:

-
systemctl restart jackett.service
-
-

It should now be available at https://isos.yourdomain.com/jack. Add an admin password right away by scrolling down to the first config setting; don’t forget to click on “Set Password”. Change any other config you want from the Web UI, too (you’ll need to click on the blue “Apply server settings” button).

-

Note that you need to set the “Base URL override” to http://127.0.0.1:9117 (or whatever port you used) so that the “Copy Torznab Feed” button works for each indexer.

-

Indexers

-

For Jackett, an indexer is just a configured tracker for some of the commonly known torrent sites. Jackett comes with a lot of pre-configured public and private indexers which usually have multiple URLs (mirrors) per indexer, useful when the main torrent site is down. Some indexers come with extra features/configuration depending on what the site specializes on.

-

To add an indexer click on “+ Add Indexer” at the top of the Web UI and look for the indexers you want, then either click on the “+” icon on the far right of each indexer, or select the ones you want (clicking the checkbox on the far left of the indexer) and scroll all the way to the bottom to click on “Add Selected”. They will then show as a list with some available actions such as “Copy RSS Feed”, “Copy Torznab Feed”, “Copy Potato Feed”, and buttons to search, configure, delete and test the indexer, as shown below:

-
-Jackett: configured indexers -
Jackett: configured indexers
-
-

You can manually test the indexers by doing a basic search to see if they return anything: either search an individual indexer by clicking on the magnifying glass icon on its right, or click on the “Manual Search” button next to the “+ Add Indexer” button at the top right.

-

Explore each indexer’s configuration in case there is stuff you might want to change.

-

FlareSolverr

-

FlareSolverr is used to bypass certain protection that some torrent sites have. It’s not strictly necessary and is only needed for some trackers from time to time; even then it doesn’t always work.

-

You could install from the AUR with yay:

-
yay -S flaresolverr-bin
-
-

At the time of writing, the flaresolverr package didn’t work for me because of python-selenium. flaresolverr-bin was updated around the time I was writing this, so that is what I’m using and what’s working fine so far; it contains almost everything needed (it has self-contained libraries) except for Xvfb.

-

Install dependencies via pacman:

-
pacman -S xorg-server-xvfb
-
-

You can now start/enable the flaresolverr.service:

-
systemctl enable flaresolverr.service
-systemctl start flaresolverr.service
-
-

Verify that the service started correctly by checking the logs:

-
journalctl -fxeu flaresolverr
-
-

It should show “Test successful” and “Serving on http://0.0.0.0:8191” (which is the default). Jackett now needs to be configured by adding http://127.0.0.1:8191 to the “FlareSolverr API URL” field, found almost at the end of the config section, and then clicking on the blue “Apply server settings” button at the beginning of the section. This doesn’t need to be exposed or anything; it’s just an internal API that Jackett (or anything you want) will use.

-
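To double check that the API itself answers, you can query it directly; the endpoint below is FlareSolverr’s v1 JSON API (verify the payload format against the version you installed):

```shell
# Ask FlareSolverr to fetch a page; a healthy install replies with a
# JSON body containing "status": "ok".
curl -s -X POST http://127.0.0.1:8191/v1 \
    -H "Content-Type: application/json" \
    -d '{"cmd": "request.get", "url": "https://example.com", "maxTimeout": 60000}'
```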

qBitTorrent

-

qBitTorrent is a fast, stable and light BitTorrent client that comes with many features; in my opinion it’s a really user-friendly client and has been my personal choice for years now. But you can choose whatever client you want; there are more lightweight alternatives such as Transmission.

-

Install the qbittorrent-nox package (“nox” as in “no X server”):

-
pacman -S qbittorrent-nox
-
-

By default the package doesn’t create any (service) user, but it is recommended to have one so you can run the service under it. Create the user; I’ll call it qbittorrent and leave it with the default homedir (/home):

-
useradd -r -m qbittorrent
-
-

Add the qbittorrent user to the servarr group:

-
gpasswd -a qbittorrent servarr
-
-

Decide on a port number to run the service on for the next steps; the default is 8080 but I’ll use 30000. It doesn’t matter much, as long as it matches across all the config. This is the qbittorrent service port, used to connect to the instance itself through the Web UI or via API; you also need to open a port for listening to peer connections. Choose any port you want, for example 50000, and open it with your firewall, ufw in my case:

-
ufw allow 50000/tcp comment "qBitTorrent - Listening port"
-
-

Reverse proxy

-

Add the following location block into the isos.conf with whatever subdirectory name you want, I’ll call it qbt:

-
location /qbt/ {
-    proxy_pass http://localhost:30000/; # change port to whatever number you want
-    proxy_http_version 1.1;
-
-    proxy_set_header Host $host;
-    proxy_set_header X-Forwarded-Host $http_host;
-    proxy_set_header X-Forwarded-For $remote_addr;
-
-    proxy_cookie_path / "/; Secure";
-    proxy_read_timeout 600s;
-    proxy_send_timeout 600s;
-}
-
-

This is taken from qBitTorrent: Nginx reverse proxy for Web UI. Restart the nginx service for the changes to take effect:

-
systemctl restart nginx.service
-
-

Start using qBitTorrent

-

You can now start/enable the qbittorrent-nox@.service using the service account created (qbittorrent):

-
systemctl enable qbittorrent-nox@qbittorrent.service
-systemctl start qbittorrent-nox@qbittorrent.service
-
-

This will start qbittorrent using the default config. You need to change the port (in my case to 30000) and set qbittorrent to restart on exit (the Web UI has a close button). I guess this can be done before enabling/starting the service, but either way, let’s create a “drop-in” file by “editing” the service:

-
systemctl edit qbittorrent-nox@qbittorrent.service
-
-

Which will bring up a file editing mode containing the service unit and a space where you can add/override anything, add:

-
[Service]
-Environment="QBT_WEBUI_PORT=30000" # or whatever port number you want
-Restart=on-success
-RestartSec=5s
-
-

When exiting from the file (if you wrote anything) it will create the override config. Restart the service for changes to take effect (you might be asked to reload the systemd daemon):

-
systemctl restart qbittorrent-nox@qbittorrent.service
-
-

You can now head to https://isos.yourdomain.com/qbt/ and log in with user admin and password adminadmin (by default). Change the default password right away by going to Tools -> Options -> Web UI -> Authentication. The Web UI is basically the same as the normal desktop UI, so if you’ve used that it will feel familiar. From here you can treat it as a normal torrent client and even start using it for things other than what’s specified here.

-
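If you ever want to script against it, the same login can be done through qBitTorrent’s Web API (the paths below are from the qBittorrent WebUI API; adjust the port and credentials to your setup):

```shell
# Log in and store the session cookie, then query the app version.
curl -s -c /tmp/qbt.cookies \
    --data "username=admin&password=adminadmin" \
    http://127.0.0.1:30000/api/v2/auth/login
curl -s -b /tmp/qbt.cookies http://127.0.0.1:30000/api/v2/app/version
```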

Configuration

-

It should be usable already but you can go further and fine-tune it, especially to some kind of “convention” as shown in TRaSH: qBitTorrent basic setup and subsequent qbittorrent guides.

-

I use all the settings suggested by TRaSH, where the only “changes” are for personal paths, ports, and in general connection settings that depend on my setup. The only really important setting I noticed that can affect our setup (meaning what is described in this entry) is Web UI -> Authentication -> “Bypass authentication for clients on localhost”. Enabling it is an issue because the reverse proxy accesses qbittorrent via localhost, which would leave the service open to the world; experiment at your own risk.

-

Finally, add categories by following TRaSH: qBitTorrent how to add categories, basically right clicking on Categories -> All (x) (located at the left of the Web UI) and then on “Add category”; I use the same “Category” and “Save Path” (tv and tv, for example), where the “Save Path” will be a subdirectory of the configured global directory for torrents (TRaSH: qBitTorent paths and categories breakdown). I added 3: tv, movies and anime.

-

Trackers

-

Often some of the trackers that come with torrents are dead or just don’t work. You have the option to add extra trackers to torrents either by:

- -

On both options, the list of trackers needs to have at least one new line in between each tracker. You can find trackers from the following sources:

- -

Both sources maintain an updated list of trackers. You also might need to enable a couple of advanced options so that all the new trackers are contacted (instead of only the first tracker): go to Tools -> Options -> Advanced -> libtorrent Section and enable both “Always announce to all tiers” and “Always announce to all trackers in a tier”.
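As a reference for the expected format, a pasted tracker list looks like this (placeholder URLs; use the ones from the sources above):

```
udp://tracker.example.org:6969/announce

udp://open.example.net:1337/announce

http://tracker.example.com:80/announce
```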

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/updated_pyssg_pymdvar_and_website.html b/live/blog/a/updated_pyssg_pymdvar_and_website.html deleted file mode 100644 index 291a170..0000000 --- a/live/blog/a/updated_pyssg_pymdvar_and_website.html +++ /dev/null @@ -1,152 +0,0 @@ - - - - - - -Updated pyssg to include pymdvar and the website -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Updated pyssg to include pymdvar and the website

- -

Again, I’ve updated pyssg to add a bit of unit-testing, as well as to include my extension pymdvar, which is used to convert ${some_variables} into their respective values based on a config file and/or environment variables. With this I also updated a bit of the CSS of the site as well as basically all the entry and base templates, a much needed update (for me, because externally it doesn’t look like much). Along with this I also added a “return to top” button: once you scroll enough on the site, a new button appears on the bottom right to get back to the top. I also added tables of contents to entries that could use them (as well as a bit of CSS for them).

-
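The kind of substitution pymdvar does can be sketched roughly like this (a minimal illustration of the idea, not the actual extension code; the real package hooks into python-markdown and has its own option names):

```python
import os
import re

# Minimal sketch of ${variable} expansion in the spirit of pymdvar
# (not the actual extension code). Variables are looked up in a config
# dict first, then in the environment; unknown ones are left untouched.
VAR_RE = re.compile(r"\$\{(\w+)\}")

def expand_variables(text: str, config: dict) -> str:
    def replace(match):
        name = match.group(1)
        if name in config:
            return config[name]
        return os.environ.get(name, match.group(0))
    return VAR_RE.sub(replace, text)

print(expand_variables("${STATIC_URL}/css/style.css",
                       {"STATIC_URL": "https://static.luevano.xyz"}))
# → https://static.luevano.xyz/css/style.css
```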

This update took a long time because I had a fundamental issue with how I was managing the “static” website, where I host all assets such as CSS, JS, images, etc., because I was using the <base> HTML tag. The issue is that this tag affects everything and there is no “opt-out” on some body tags, meaning that I would have to write the whole URL for all static assets. So I tried looking into changing how the image extension for python-markdown works, so that it includes this “base” URL I needed. But it was too much hassle, so I ended up developing my own extension mentioned earlier. Just as a side note, I noticed that my extension doesn’t cover all my needs, so it probably won’t cover yours; if you end up using it just test it out a bit yourself first. PRs are welcome.

-

One thing led to another so I ended up changing a lot of stuff, and with changes comes tiredness, so I ended up leaving the project for a while (again). This also led to not wanting to write or add anything else to the site until I sorted things out. But I’m reviving it again, I guess, and on to the next cycle.

-

The next things I’ll be doing are continuing with my @gamedev journey and probably upload some drawings if I feel like doing some.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/updating_creating_entries_titles_to_setup.html b/live/blog/a/updating_creating_entries_titles_to_setup.html deleted file mode 100644 index ea561cf..0000000 --- a/live/blog/a/updating_creating_entries_titles_to_setup.html +++ /dev/null @@ -1,149 +0,0 @@ - - - - - - -Updated the how-to entries titles -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Updated the how-to entries titles

- -

One of the main reasons I started “blogging” was basically just to document how I set stuff up so I can reference it later in the future if I ever need to replicate the steps, or just to show somebody; and these entries have helped me do so multiple times. I’ll keep creating these entries, but after a while the Creating a titles started to feel weird, because we’re not really creating anything, it is just a set up/configuration/how-to/etc. So I think that using Set up a for the titles is better and makes more sense; probably using How to set up a would be even better for the SEO bullshit.

-

Anyways, I’ll start using Set up a instead of Creating a and will retroactively change the titles of these entries (by this entry the change should already be applied). This might impact some RSS feed readers, as they keep a cache of the feed and might duplicate the entries; heads up if for some reason somebody is using it.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/volviendo_a_usar_la_pagina.html b/live/blog/a/volviendo_a_usar_la_pagina.html deleted file mode 100644 index 0c713ca..0000000 --- a/live/blog/a/volviendo_a_usar_la_pagina.html +++ /dev/null @@ -1,152 +0,0 @@ - - - - - - -Volviendo a usar la página -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Volviendo a usar la página

- -

After a long time of struggling with wanting to use this thing again (damn d-word and all that), I finally got the setup sorted out again so I can add new entries.

-

Among the things I had to do was update pyssg, because I couldn’t use it right away as it was; and while I was at it I added a couple of new features. Later I want to add more functionality so I can build the whole site with it; for now it’s done in segments: everything under luevano.xyz is done manually, while blog and art use pyssg.

-

Another thing is that I might go back and edit some entries, just to homogenize the ones titled Create a… (it makes more sense for them to be Setup x… or something similar).

-

In other news, I’m very comfortable at my current job, even though the last 3 weeks or so have been hell at work. I should think about whether to leave personal or work stuff out of here, since who knows who might end up stumbling upon this *thinking emoji*.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/vpn_server_with_openvpn.html b/live/blog/a/vpn_server_with_openvpn.html deleted file mode 100644 index 8456352..0000000 --- a/live/blog/a/vpn_server_with_openvpn.html +++ /dev/null @@ -1,446 +0,0 @@ - - - - - - -Set up a VPN server with OpenVPN -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up a VPN server with OpenVPN

- -

I’ve been wanting to write this entry, but had no time to do it, since I also had to set up the VPN service itself to make sure what I’m writing makes sense; today is the day.

-

Like with any other of my entries I based my setup on the Arch Wiki, this install script and this profile generator script.

-

This will be installed and working alongside the other stuff I’ve written about in other posts (see the server tag). Also, this is intended only for IPv4 (it’s not that hard to include IPv6, but meh). As always, all commands are executed as root unless stated otherwise.

-

Table of contents

- -

Prerequisites

-

Pretty simple:

- -

Create PKI from scratch

-

PKI stands for Public Key Infrastructure and basically it’s required for certificates, private keys and more. This is supposed to work between two servers and one client: a server in charge of creating, signing and verifying the certificates, a server with the OpenVPN service running and the client making the request.

-

In a nutshell, this is supposed to work something like: 1) a client wants to use the VPN service, so it creates a request and sends it to the signing server, 2) this server checks and signs the request, returning the certificates to both the VPN service and the client, and 3) the client can now connect to the VPN service using the signed certificate, which the OpenVPN server knows about.

-
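For reference, that distributed flow maps to Easy-RSA commands roughly like this (client1 is a placeholder name; this is the flow we are about to skip, not something you need to run for this setup):

```shell
# On the requesting machine: create a key pair and a signing request.
easyrsa gen-req client1 nopass

# Transfer pki/reqs/client1.req to the CA machine, then on the CA:
easyrsa import-req /tmp/client1.req client1
easyrsa sign-req client client1

# Send pki/issued/client1.crt (plus ca.crt) back to the requester.
```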

That’s how it should be set up… but, to be honest, all of this is a hassle and (in my case) I want something simple to use and manage. So I’m gonna do it all on one server and then just give away the configuration files for the clients, effectively generating files that anyone can run and that will work, meaning that you need to be careful who you give these files to (it also comes with a revoking mechanism, so no worries).

-

This is done with Easy-RSA.

-

Install the easy-rsa package:

-
pacman -S easy-rsa
-
-

Initialize the PKI and generate the CA keypair:

-
cd /etc/easy-rsa
-easyrsa init-pki
-easyrsa build-ca nopass
-
-

Create the server certificate and private key (while in the same directory):

-
EASYRSA_CERT_EXPIRE=3650 easyrsa build-server-full server nopass
-
-

Where server is just a name to identify your server certificate keypair, I just use server but could be anything (like luevano.xyz in my case).

-

Create the certificate revocation list, AKA CRL (it will be used later, but might as well have it now):

-
EASYRSA_CRL_DAYS=3650 easyrsa gen-crl
-
-

After this we should have 6 new files:

-
/etc/easy-rsa/pki/ca.crt
-/etc/easy-rsa/pki/private/ca.key
-/etc/easy-rsa/pki/issued/server.crt
-/etc/easy-rsa/pki/reqs/server.req
-/etc/easy-rsa/pki/private/server.key
-/etc/easy-rsa/pki/crl.pem
-
-

It is recommended to copy some of these files over to the openvpn directory, but I prefer to keep them here and just change some of the permissions:

-
chmod o+rx pki
-chmod o+rx pki/ca.crt
-chmod o+rx pki/issued
-chmod o+rx pki/issued/server.crt
-chmod o+rx pki/private
-chmod o+rx pki/private/server.key
-chown nobody:nobody pki/crl.pem
-chmod o+r pki/crl.pem
-
-

Finally, go to the openvpn directory and create the required files there:

-
cd /etc/openvpn/server
-openssl dhparam -out dh.pem 2048
-openvpn --genkey secret ta.key
-
-

OpenVPN

-

OpenVPN is a robust and highly flexible VPN daemon, that’s pretty complete feature-wise.

-

Install the openvpn package:

-
pacman -S openvpn
-
-

Now, most of the stuff is going to be handled by (each, if you have more than one) server configuration file. This might be the hardest thing to configure, but I’ve used a basic configuration file that has worked well for me, which is a compilation of stuff I found on the internet while configuring the file a while back.

-
# Server ip addres (ipv4).
-local 1.2.3.4 # your server public ip
-
-# Port.
-port 1194 # Might want to change it to 443
-
-# TCP or UDP.
-;proto tcp
-proto udp # If the port changes to 443, you should change this to tcp, too
-
-# "dev tun" will create a routed IP tunnel,
-# "dev tap" will create an ethernet tunnel.
-;dev tap
-dev tun
-
-# Server specific certificates and more.
-ca /etc/easy-rsa/pki/ca.crt
-cert /etc/easy-rsa/pki/issued/server.crt
-key /etc/easy-rsa/pki/private/server.key  # This file should be kept secret.
-dh /etc/openvpn/server/dh.pem
-auth SHA512
-tls-crypt /etc/openvpn/server/ta.key 0 # This file is secret.
-crl-verify /etc/easy-rsa/pki/crl.pem
-
-# Network topology.
-topology subnet
-
-# Configure server mode and supply a VPN subnet
-# for OpenVPN to draw client addresses from.
-server 10.8.0.0 255.255.255.0
-
-# Maintain a record of client <-> virtual IP address
-# associations in this file.
-ifconfig-pool-persist ipp.txt
-
-# Push routes to the client to allow it
-# to reach other private subnets behind
-# the server.
-;push "route 192.168.10.0 255.255.255.0"
-;push "route 192.168.20.0 255.255.255.0"
-
-# If enabled, this directive will configure
-# all clients to redirect their default
-# network gateway through the VPN, causing
-# all IP traffic such as web browsing and
-# and DNS lookups to go through the VPN
-push "redirect-gateway def1 bypass-dhcp"
-
-# Certain Windows-specific network settings
-# can be pushed to clients, such as DNS
-# or WINS server addresses.
-# Google DNS.
-;push "dhcp-option DNS 8.8.8.8"
-;push "dhcp-option DNS 8.8.4.4"
-
-# The keepalive directive causes ping-like
-# messages to be sent back and forth over
-# the link so that each side knows when
-# the other side has gone down.
-keepalive 10 120
-
-# The maximum number of concurrently connected
-# clients we want to allow.
-max-clients 5
-
-# It's a good idea to reduce the OpenVPN
-# daemon's privileges after initialization.
-user nobody
-group nobody
-
-# The persist options will try to avoid
-# accessing certain resources on restart
-# that may no longer be accessible because
-# of the privilege downgrade.
-persist-key
-persist-tun
-
-# Output a short status file showing
-# current connections, truncated
-# and rewritten every minute.
-status openvpn-status.log
-
-# Set the appropriate level of log
-# file verbosity.
-#
-# 0 is silent, except for fatal errors
-# 4 is reasonable for general usage
-# 5 and 6 can help to debug connection problems
-# 9 is extremely verbose
-verb 3
-
-# Notify the client that when the server restarts so it
-# can automatically reconnect.
-# Only usable with udp.
-explicit-exit-notify 1
-
-

# and ; are comments. Read each and every line; you might want to change some stuff (like the logging), especially the first line, which is your server’s public IP.

-

Enable forwarding

-

Now, we need to enable packet forwarding (so we can access the web while connected to the VPN), which can be enabled on the interface level or globally (you can check the different options with sysctl -a | grep forward). I’ll do it globally, run:

-
sysctl net.ipv4.ip_forward=1
-
-

And create/edit the file /etc/sysctl.d/30-ipforward.conf:

-
net.ipv4.ip_forward=1
-
-

Now we need to configure ufw to forward traffic through the VPN. Append the following to /etc/default/ufw (or edit the existing line):

-
...
-DEFAULT_FORWARD_POLICY="ACCEPT"
-...
-
-

And change the /etc/ufw/before.rules, appending the following lines after the header but before the *filter line:

-
...
-# NAT (Network Address Translation) table rules
-*nat
-:POSTROUTING ACCEPT [0:0]
-
-# Allow traffic from clients to the interface
--A POSTROUTING -s 10.8.0.0/24 -o interface -j MASQUERADE
-
-# do not delete the "COMMIT" line or the NAT table rules above will not be processed
-COMMIT
-
-# Don't delete these required lines, otherwise there will be errors
-*filter
-...
-
-

Where interface must be changed depending on your system (in my case it’s ens3; another common one is eth0). I always check this by running ip addr, which gives you a list of interfaces (the one containing your server’s public IP is the one you want, or whatever interface your server uses to connect to the internet):

-
...
-2: ens3: <SOMETHING,SOMETHING> bla bla
-    link/ether bla:bla
-    altname enp0s3
-    inet my.public.ip.addr bla bla
-...
-
-

And also make sure the 10.8.0.0/24 matches the subnet specified in the server.conf file (in this example it matches). You should check this very carefully, because I just spent a good 2 hours debugging why my configuration wasn’t working, and this was the reason (I could connect to the VPN, but had no external connection to the web).

-

Finally, allow the OpenVPN port you specified (in this example its 1194/udp) and reload ufw:

-
ufw allow 1194/udp comment "OpenVPN"
-ufw reload
-
-

At this point, the server-side configuration is done and you can start and enable the service:

-
systemctl start openvpn-server@server.service
-systemctl enable openvpn-server@server.service
-
-

Where the server after @ is the name of your configuration, server.conf without the .conf in my case.

-

Create client configurations

-

You might notice that I didn’t specify how to actually connect to the VPN. For that we need a configuration file similar to the server.conf file that we created.

-

The real way of doing this would be to run steps similar to the easy-rsa ones locally, send the request to the server, sign it, and retrieve it. Fuck all that, we’ll just create all configuration files on the server, as I was mentioning earlier.

-

Also, the client configuration file has to match the server one (to some degree), to make this easier you can create a client-common file in /etc/openvpn/server with the following content:

-
client
-dev tun
-remote 1.2.3.4 1194 udp # change this to match your ip and port
-resolv-retry infinite
-nobind
-persist-key
-persist-tun
-remote-cert-tls server
-auth SHA512
-verb 3
-
-

Where you should make any changes necessary, depending on your configuration.

-

Now, we need a way to create and revoke new configuration files. For this I created a script, heavily based on one of the links I mentioned at the beginning. You can place these scripts anywhere you like, and you should take a look before running them because you’ll be running them with elevated privileges (sudo).

-

In a nutshell, what it does is: generate a new client certificate keypair, update the CRL and create a new .ovpn configuration file that consists of the client-common data and all of the required certificates; or, revoke an existing client and refresh the CRL. The file is placed under ~/ovpn.

-

Create a new file with the following content (name it whatever you like) and don’t forget to make it executable (chmod +x vpn_script):

-
#!/bin/sh
-# Client ovpn configuration creation and revoking.
-MODE=$1
-if [ ! "$MODE" = "new" -a ! "$MODE" = "rev" ]; then
-    echo "$1 is not a valid mode, using default 'new'"
-    MODE=new
-fi
-
-CLIENT=${2:-guest}
-if [ -z "$2" ]; then
-    echo "there was no client name passed as second argument, using 'guest' as default"
-fi
-
-# Expiration config.
-EASYRSA_CERT_EXPIRE=3650
-EASYRSA_CRL_DAYS=3650
-
-# Current PWD.
-CPWD=$PWD
-cd /etc/easy-rsa/
-
-if [ "$MODE" = "rev" ]; then
-    easyrsa --batch revoke $CLIENT
-
-    echo "$CLIENT revoked."
-elif [ "$MODE" = "new" ]; then
-    easyrsa build-client-full $CLIENT nopass
-
-    # This is what actually generates the config file.
-    {
-    cat /etc/openvpn/server/client-common
-    echo "<ca>"
-    cat /etc/easy-rsa/pki/ca.crt
-    echo "</ca>"
-    echo "<cert>"
-    sed -ne '/BEGIN CERTIFICATE/,$ p' /etc/easy-rsa/pki/issued/$CLIENT.crt
-    echo "</cert>"
-    echo "<key>"
-    cat /etc/easy-rsa/pki/private/$CLIENT.key
-    echo "</key>"
-    echo "<tls-crypt>"
-    sed -ne '/BEGIN OpenVPN Static key/,$ p' /etc/openvpn/server/ta.key
-    echo "</tls-crypt>"
-    } > "$(eval echo ~${SUDO_USER:-$USER}/ovpn/$CLIENT.ovpn)"
-
-    eval echo "~${SUDO_USER:-$USER}/ovpn/$CLIENT.ovpn file generated."
-fi
-
-# Finish up, re-generates the crl
-easyrsa gen-crl
-chown nobody:nobody pki/crl.pem
-chmod o+r pki/crl.pem
-cd $CPWD
-
-

And the way to use it is to run bash vpn_script <mode> <client_name> as sudo, where mode is new or rev (revoke); when revoking, it doesn’t actually delete the .ovpn file in ~/ovpn. Again, this is a little script that I put together, so you should check it out; it may need tweaks (especially depending on your directory structure for easy-rsa).

-
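For example, assuming the script was saved as vpn_script (phone is just a hypothetical client name):

```shell
# Create a new client config; it ends up at ~/ovpn/phone.ovpn.
sudo bash vpn_script new phone

# Revoke that same client later on.
sudo bash vpn_script rev phone
```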

Now, just get the .ovpn file generated, import it to OpenVPN in your client of preference and you should have a working VPN service.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/website_with_nginx.html b/live/blog/a/website_with_nginx.html deleted file mode 100644 index 5c50c4f..0000000 --- a/live/blog/a/website_with_nginx.html +++ /dev/null @@ -1,284 +0,0 @@ - - - - - - -Set up a website with Nginx and Certbot -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up a website with Nginx and Certbot

- -

These are general notes on how to set up a Nginx web server plus Certbot for SSL certificates, initially learned from Luke’s video; after some use and research I added more stuff to the mix. And, actually at the time of writing this entry, I’m configuring the web server again on a new VPS instance, so this is going to be fresh.

-

As a side note, i use arch btw, so everything here is aimed at an Arch Linux distro, and I’m doing everything on a VPS. Also note that most, if not all, commands here are executed with root privileges.

-

Table of contents

- -

Prerequisites

-

You will need two things:

- -

Nginx

-

Nginx is a web (HTTP) server and reverse proxy server.

-

You have two options: nginx and nginx-mainline. I prefer nginx-mainline because it’s the “up to date” package even though nginx is labeled to be the “stable” version. Install the package and enable/start the service:

-
pacman -S nginx-mainline
-systemctl enable nginx.service
-systemctl start nginx.service
-
-

And that’s it, at this point you can already look at the default initial page of Nginx if you enter the IP of your server in a web browser. You should see something like this:

-
-Nginx welcome page -
Nginx welcome page
-
-

As stated in the welcome page, configuration is needed, head to the directory of Nginx:

-
cd /etc/nginx
-
-

Here you have several files; the important one is nginx.conf, which, as its name implies, contains general configuration of the web server. If you peek into the file, you will see that it contains around 120 lines, most of which are commented out, and it contains the welcome page server block. While you can configure a website in this file, it’s common practice to do it in a separate file (so you can scale really easily if needed for more websites or sub-domains).

-

Inside the nginx.conf file, delete the server blocks and add the lines include sites-enabled/*; (to look into individual server configuration files) and types_hash_max_size 4096; (to get rid of an ugly warning that will keep appearing) somewhere inside the http block. The final nginx.conf file would look something like (ignoring the comments just for clarity, but you can keep them as side notes):

-
worker_processes 1;
-
-events {
-    worker_connections 1024;
-}
-
-http {
-    include sites-enabled/*;
-    include mime.types;
-    default_type application/octet-stream;
-
-    sendfile on;
-
-    keepalive_timeout 65;
-
-    types_hash_max_size 4096;
-}
-
-

Next, inside the directory /etc/nginx/ create the sites-available and sites-enabled directories, and go into the sites-available one:

-
mkdir sites-available
-mkdir sites-enabled
-cd sites-available
-
-

Here, create a new .conf file for your website and add the following lines (this is just the sample content more or less):

-
server {
-    listen 80;
-    listen [::]:80;
-
-    root /path/to/root/directory;
-    server_name domain.name another.domain.name;
-    index index.html anotherindex.otherextension;
-
-    location /{
-        try_files $uri $uri/ =404;
-    }
-}
-
-

That could serve as a template if you intend to add more domains.

-

Note some things:

- -

Then, make a symbolic link from this configuration file to the sites-enabled directory:

-
ln -s /etc/nginx/sites-available/your_config_file.conf /etc/nginx/sites-enabled
-
-

This is so the nginx.conf file can look up the newly created server configuration. With this method of having each server configuration file separate you can easily “deactivate” any website by just deleting the symbolic link in sites-enabled and you’re good, or just add new configuration files and keep everything nice and tidy.

-
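Deactivating a site later then boils down to removing the symlink and reloading (your_config_file.conf being whatever you named it):

```shell
rm /etc/nginx/sites-enabled/your_config_file.conf
nginx -t && systemctl reload nginx
```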

All you have to do now is restart (or enable and start if you haven’t already) the Nginx service (and optionally test the configuration):

-
nginx -t
-systemctl restart nginx
-
-

If everything goes correctly, you can now go to your website by typing domain.name on a web browser. But you will see a “404 Not Found” page like the following (maybe with different Nginx version):

-
-Nginx 404 Not Found page -
Nginx 404 Not Found page
-
-

That’s no problem, because it means that the web server is actually working. Just add an index.html file with something simple to see it in action (in the /var/www/some_folder that you decided upon). If you keep seeing the 404 page, make sure your root line is correct and that the directory/index file exists.

-

I like to remove the .html and trailing / on the URLs of my website, for that you need to add the following rewrite lines and modify the try_files line (for more: Sean C. Davis: Remove HTML Extension And Trailing Slash In Nginx Config):

-
server {
-    ...
-    rewrite ^(/.*)\.html(\?.*)?$ $1$2 permanent;
-    rewrite ^/(.*)/$ /$1 permanent;
-    ...
-    try_files $uri/index.html $uri.html $uri/ $uri =404;
-    ...
-
-

Certbot

-

Certbot is what provides the SSL certificates via Let’s Encrypt.

-

The only “bad” (bloated) thing about Certbot is that it uses python, but for me it doesn’t matter too much. You may want to look up another alternative if you prefer. Install the packages certbot and certbot-nginx:

-
pacman -S certbot certbot-nginx
-
-

After that, all you have to do now is run certbot and follow the instructions given by the tool:

-
certbot --nginx
-
-

It will ask you for some information, for you to accept some agreements and the names to activate HTTPS for. Also, you will want to “say yes” to the redirection from HTTP to HTTPS. And that’s it, you can now go to your website and see that you have HTTPS active.

-

Now, the certificate given by certbot expires every 3 months or so, so you’ll want to renew it periodically. I did this before using cron or by manually creating a systemd timer and service, but now it’s just a matter of enabling the certbot-renew.timer:

-
systemctl enable certbot-renew.timer
-systemctl start certbot-renew.timer
-
-
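You can confirm the timer is actually scheduled with:

```shell
systemctl list-timers 'certbot*'
```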

The deploy-hook is not needed anymore, only for plugins. For more, visit the Arch Linux Wiki.

- - - - -
- -
- - - - \ No newline at end of file diff --git a/live/blog/a/xmpp_server_with_prosody.html b/live/blog/a/xmpp_server_with_prosody.html deleted file mode 100644 index 034bc50..0000000 --- a/live/blog/a/xmpp_server_with_prosody.html +++ /dev/null @@ -1,665 +0,0 @@ - - - - - - -Set up an XMPP server with Prosody compatible with Conversations and Movim -- Luévano's Blog - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - -
- -
-
- -
-

Set up an XMPP server with Prosody compatible with Conversations and Movim

- -

Update: I no longer host this XMPP server as it consumed a lot of resources and I wasn’t using it that much. I’ll probably re-create it in the future, though.

-

Recently I set up an XMPP server (and a Matrix one, too) for my personal use and for friends if they want one; made one for EL ELE EME for example. So, here are the notes on how I set up the server that is compatible with the Conversations app and the Movim social network. You can see my addresses at contact and the XMPP compliance/score of the server.

-

One of the best resources I found that helped me a lot was Installing and Configuring Prosody XMPP Server on Debian 9, the Arch Wiki and the official documentation.

-

As with my other entries, this is under a server running Arch Linux, with the Nginx web server and Certbot certificates. And all commands here are executed as root, unless specified otherwise.

-

Table of contents

- -

Prerequisites

-

Same as with my other entries (website, mail and git) plus:

- -
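The compliance and security tests mentioned at the end of this entry assume the XMPP SRV DNS records are in place; a minimal zone-file sketch (assuming the server itself is reachable at xmpp.your.domain; adjust TTL, priority and weight to taste):

```
_xmpp-client._tcp.your.domain.  3600 IN SRV 0 5 5222 xmpp.your.domain.
_xmpp-server._tcp.your.domain.  3600 IN SRV 0 5 5269 xmpp.your.domain.
```

Ports 5222 and 5269 are the standard client-to-server and server-to-server XMPP ports, respectively.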

Prosody

-

Prosody is an implementation of the XMPP protocol that is flexible and extensible.

-

Install the prosody package (with optional dependencies) and the mercurial package:

-
pacman -S prosody mercurial lua52-sec lua52-dbi lua52-zlib
-
-

We need mercurial to be able to download and update the extra modules needed to make the server compliant with conversations.im and mov.im. Go to /var/lib/prosody, clone the latest Prosody modules repository and prepare the directories:

-
cd /var/lib/prosody
-hg clone https://hg.prosody.im/prosody-modules modules-available
-mkdir modules-enabled
-
-

You can see that I follow a similar approach to the one I used with Nginx and the server configuration, where I have all the modules available in one directory and make symlinks in another to keep track of what is being used. You can update the repository by running hg pull --update while inside the modules-available directory (similar to Git).

-

Make symbolic links to the following modules:

-
ln -s /var/lib/prosody/modules-available/{module_name} /var/lib/prosody/modules-enabled/
-...
-
- -

And add other modules if needed, but these work for the apps that I mentioned. You should also change the permissions for these files:

-
chown -R prosody:prosody /var/lib/prosody
-
-
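The symlink-per-module step can be scripted; here is a small sketch, where the module names are just examples taken from the modules enabled in the configuration below (pick the ones you actually need):

```shell
#!/bin/sh
# Symlink each listed community module from modules-available into
# modules-enabled, overwriting stale links if present.
enable_modules() {
    avail="$1"; enabled="$2"; shift 2
    for mod in "$@"; do
        ln -sfn "$avail/$mod" "$enabled/$mod"
    done
}

# On the server, as root, something like:
# enable_modules /var/lib/prosody/modules-available /var/lib/prosody/modules-enabled \
#     mod_smacks mod_cloud_notify mod_http_avatar
```

Remember to re-run the chown shown above if you add modules later.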

Now, configure the server by editing the /etc/prosody/prosody.cfg.lua file. It’s a bit tricky to configure, so here is my configuration file (lines starting with -- are comments). Make sure to adjust it to your domain, and maybe your preferences. Read each line and each comment to know what’s going on; it’s easier to explain everything with comments in the file itself than to split it into a lot of pieces.

-

Also note that the configuration file has a “global” section and per “virtual host”/“component” sections: everything above the first VirtualHost/Component is global, and everything below each VirtualHost/Component corresponds to that section.

-
-- important for systemd
-daemonize = true
-pidfile = "/run/prosody/prosody.pid"
-
--- or your own account; note that this is an XMPP JID, not an email
-admins = { "admin@your.domain" }
-
-contact_info = {
-    abuse = { "mailto:abuse@your.domain", "xmpp:abuse@your.domain" };
-    admin = { "mailto:admin@your.domain", "xmpp:admin@your.domain" };
-    feedback = { "mailto:feedback@your.domain", "xmpp:feedback@your.domain" };
-    security = { "mailto:security@your.domain" };
-    support = { "mailto:support@your.domain", "xmpp:support@muc.your.domain" };
-}
-
--- so Prosody looks up the plugins we added
-plugin_paths = { "/var/lib/prosody/modules-enabled" }
-
-modules_enabled = {
-    -- Generally required
-        "roster"; -- Allow users to have a roster. Recommended ;)
-        "saslauth"; -- Authentication for clients and servers. Recommended if you want to log in.
-        "tls"; -- Add support for secure TLS on c2s/s2s connections
-        "dialback"; -- s2s dialback support
-        "disco"; -- Service discovery
-    -- Not essential, but recommended
-        "carbons"; -- Keep multiple clients in sync
-        "pep"; -- Enables users to publish their avatar, mood, activity, playing music and more
-        "private"; -- Private XML storage (for room bookmarks, etc.)
-        "blocklist"; -- Allow users to block communications with other users
-        "vcard4"; -- User profiles (stored in PEP)
-        "vcard_legacy"; -- Conversion between legacy vCard and PEP Avatar, vcard
-        "limits"; -- Enable bandwidth limiting for XMPP connections
-    -- Nice to have
-        "version"; -- Replies to server version requests
-        "uptime"; -- Report how long server has been running
-        "time"; -- Let others know the time here on this server
-        "ping"; -- Replies to XMPP pings with pongs
-        "register"; -- Allow users to register on this server using a client and change passwords
-        "mam"; -- Store messages in an archive and allow users to access it
-        "csi_simple"; -- Simple Mobile optimizations
-    -- Admin interfaces
-        "admin_adhoc"; -- Allows administration via an XMPP client that supports ad-hoc commands
-        --"admin_telnet"; -- Opens telnet console interface on localhost port 5582
-    -- HTTP modules
-        "http"; -- Explicitly enable http server.
-        "bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
-        "websocket"; -- XMPP over WebSockets
-        "http_files"; -- Serve static files from a directory over HTTP
-    -- Other specific functionality
-        "groups"; -- Shared roster support
-        "server_contact_info"; -- Publish contact information for this service
-        "announce"; -- Send announcement to all online users
-        "welcome"; -- Welcome users who register accounts
-        "watchregistrations"; -- Alert admins of registrations
-        "motd"; -- Send a message to users when they log in
-        --"legacyauth"; -- Legacy authentication. Only used by some old clients and bots.
-        --"s2s_bidi"; -- not yet implemented, have to wait for v0.12
-        "bookmarks";
-        "checkcerts";
-        "cloud_notify";
-        "csi_battery_saver";
-        "default_bookmarks";
-        "http_avatar";
-        "idlecompat";
-        "presence_cache";
-        "smacks";
-        "strict_https";
-        --"pep_vcard_avatar"; -- not compatible with this version of pep, wait for v0.12
-        "watchuntrusted";
-        "webpresence";
-        "external_services";
-    }
-
--- only if you want to disable some modules
-modules_disabled = {
-    -- "offline"; -- Store offline messages
-    -- "c2s"; -- Handle client connections
-    -- "s2s"; -- Handle server-to-server connections
-    -- "posix"; -- POSIX functionality, sends server to background, enables syslog, etc.
-}
-
-external_services = {
-    {
-        type = "stun",
-        transport = "udp",
-        host = "proxy.your.domain",
-        port = 3478
-    }, {
-        type = "turn",
-        transport = "udp",
-        host = "proxy.your.domain",
-        port = 3478,
-        -- you could decide this now or come back later when you install coturn
-        secret = "YOUR SUPER SECRET TURN PASSWORD"
-    }
-}
-
---- general global configuration
-http_ports = { 5280 }
-http_interfaces = { "*", "::" }
-
-https_ports = { 5281 }
-https_interfaces = { "*", "::" }
-
-proxy65_ports = { 5000 }
-proxy65_interfaces = { "*", "::" }
-
-http_default_host = "xmpp.your.domain"
-http_external_url = "https://xmpp.your.domain/"
--- or if you want to have it somewhere else, change this
-https_certificate = "/etc/prosody/certs/xmpp.your.domain.crt"
-
-hsts_header = "max-age=31556952"
-
-cross_domain_bosh = true
---consider_bosh_secure = true
-cross_domain_websocket = true
---consider_websocket_secure = true
-
-trusted_proxies = { "127.0.0.1", "::1", "192.169.1.1" }
-
-pep_max_items = 10000
-
--- this is disabled by default and I keep it like that; it's up to you
---allow_registration = true
-
--- you might want this options as they are
-c2s_require_encryption = true
-s2s_require_encryption = true
-s2s_secure_auth = false
---s2s_insecure_domains = { "insecure.example" }
---s2s_secure_domains = { "jabber.org" }
-
--- where the certificates are stored (/etc/prosody/certs by default)
-certificates = "certs"
-checkcerts_notify = 7 -- ( in days )
-
--- rate limits on connections to the server; these are my personal settings, since by default they were limited to something like 30kb/s
-limits = {
-    c2s = {
-        rate = "2000kb/s";
-    };
-    s2sin = {
-        rate = "5000kb/s";
-    };
-    s2sout = {
-        rate = "5000kb/s";
-    };
-}
-
--- again, this could be yourself, it is a jid
-unlimited_jids = { "admin@your.domain" }
-
-authentication = "internal_hashed"
-
--- if you don't want to use sql, change it to internal and comment the second line
--- since this is optional, i won't describe how to setup mysql or setup the user/database, that would be out of the scope for this entry
-storage = "sql"
-sql = { driver = "MySQL", database = "prosody", username = "prosody", password = "PROSODY USER SECRET PASSWORD", host = "localhost" }
-
-archive_expires_after = "4w" -- configure message archive
-max_archive_query_results = 20;
-mam_smart_enable = true
-default_archive_policy = "roster" -- archive only messages from users who are in your roster
-
--- normally you would want at least one log file of a certain level, but I keep all of them; the default is only the info = "*syslog" one
-log = {
-    info = "*syslog";
-    warn = "prosody.warn";
-    error = "prosody.err";
-    debug = "prosody.debug";
-    -- "*console"; -- Needs daemonize=false
-}
-
--- cloud_notify
-push_notification_with_body = false -- Whether or not to send the message body to remote pubsub node
-push_notification_with_sender = false -- Whether or not to send the message sender to remote pubsub node
-push_max_errors = 5 -- persistent push errors are tolerated before notifications for the identifier in question are disabled
-push_max_devices = 5 -- number of allowed devices per user
-
--- by default every user on this server will join these muc rooms
-default_bookmarks = {
-    { jid = "room@muc.your.domain", name = "The Room" };
-    { jid = "support@muc.your.domain", name = "Support Room" };
-}
-
--- could be your jid
-untrusted_fail_watchers = { "admin@your.domain" }
-untrusted_fail_notification = "Establishing a secure connection from $from_host to $to_host failed. Certificate hash: $sha1. $errors"
-
------------ Virtual hosts -----------
-VirtualHost "your.domain"
-    name = "Prosody"
-    http_host = "xmpp.your.domain"
-
-disco_items = {
-    { "your.domain", "Prosody" };
-    { "muc.your.domain", "MUC Service" };
-    { "pubsub.your.domain", "Pubsub Service" };
-    { "proxy.your.domain", "SOCKS5 Bytestreams Service" };
-    { "vjud.your.domain", "User Directory" };
-}
-
-
--- Multi-user chat
-Component "muc.your.domain" "muc"
-    name = "MUC Service"
-    modules_enabled = {
-        --"bob"; -- not compatible with this version of Prosody
-        "muc_limits";
-        "muc_mam"; -- message archive in muc, again, a placeholder
-        "muc_mam_hints";
-        "muc_mention_notifications";
-        "vcard_muc";
-    }
-
-    restrict_room_creation = false
-
-    muc_log_by_default = true
-    muc_log_presences = false
-    log_all_rooms = false
-    muc_log_expires_after = "1w"
-    muc_log_cleanup_interval = 4 * 60 * 60
-
-
--- Upload
-Component "xmpp.your.domain" "http_upload"
-    name = "Upload Service"
-    http_host= "xmpp.your.domain"
-    -- you might want to change this, these are numbers in bytes, so 10MB and 100MB respectively
-    http_upload_file_size_limit = 1024*1024*10
-    http_upload_quota = 1024*1024*100
-
-
--- Pubsub
-Component "pubsub.your.domain" "pubsub"
-    name = "Pubsub Service"
-    pubsub_max_items = 10000
-    modules_enabled = {
-        "pubsub_feeds";
-        "pubsub_text_interface";
-    }
-
-    -- personally i don't have any feeds configured
-    feeds = {
-        -- The part before = is used as PubSub node
-        --planet_jabber = "http://planet.jabber.org/atom.xml";
-        --prosody_blog = "http://blog.prosody.im/feed/atom.xml";
-    }
-
-
--- Proxy
-Component "proxy.your.domain" "proxy65"
-    name = "SOCKS5 Bytestreams Service"
-    proxy65_address = "proxy.your.domain"
-
-
--- Vjud, user directory
-Component "vjud.your.domain" "vjud"
-    name = "User Directory"
-    vjud_mode = "opt-in"
-
-

You HAVE to read all of the configuration file, because there are a lot of things that you need to change to make it work with your server/domain. Test the configuration file with:

-
luac5.2 -p /etc/prosody/prosody.cfg.lua
-
-

Notice that by default Prosody looks for certificates named like sub.your.domain, but if you obtain certificates like I do, you’ll have a single certificate for all subdomains, stored by default in /etc/letsencrypt/live, which has strict permissions. So, to import it you can run:

-
prosodyctl --root cert import /etc/letsencrypt/live
-
-

Ignore the complaining about not finding the subdomain certificates, and note that you will have to run that command on each certificate renewal. To automate this, add the --deploy-hook flag to your automated Certbot renewal system; for me that’s a systemd timer with the following certbot.service:

-
[Unit]
-Description=Let's Encrypt renewal
-
-[Service]
-Type=oneshot
-ExecStart=/usr/bin/certbot renew --quiet --agree-tos --deploy-hook "systemctl reload nginx.service && prosodyctl --root cert import /etc/letsencrypt/live"
-
-

And if you don’t have it already, the certbot.timer:

-
[Unit]
-Description=Twice daily renewal of Let's Encrypt's certificates
-
-[Timer]
-OnCalendar=0/12:00:00
-RandomizedDelaySec=1h
-Persistent=true
-
-[Install]
-WantedBy=timers.target
-
-

Also, go to the certs directory and make the appropriate symbolic links:

-
cd /etc/prosody/certs
-ln -s your.domain.crt SUBDOMAIN.your.domain.crt
-ln -s your.domain.key SUBDOMAIN.your.domain.key
-...
-
-
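The elided linking step above can be scripted for every subdomain Prosody expects; a sketch, where the subdomain list mirrors the components configured earlier:

```shell
#!/bin/sh
# Link the single-domain certificate/key pair to each subdomain name
# Prosody looks for inside its certs directory.
link_certs() {
    certdir="$1"; domain="$2"; shift 2
    cd "$certdir" || return 1
    for sub in "$@"; do
        ln -sfn "$domain.crt" "$sub.$domain.crt"
        ln -sfn "$domain.key" "$sub.$domain.key"
    done
}

# On the server:
# link_certs /etc/prosody/certs your.domain xmpp muc pubsub proxy vjud
```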

That’s basically all the configuration that needs Prosody itself, but we still have to configure Nginx and Coturn before starting/enabling the prosody service.

-

Nginx configuration file

-

Since this is not an ordinary configuration file, I’m going to describe it, too. Your prosody.conf file should have the following location blocks under the main server block (the one that listens on HTTPS):

-
# HTTPS server block
-server {
-    root /var/www/prosody/;
-    server_name xmpp.your.domain muc.your.domain pubsub.your.domain vjud.your.domain proxy.your.domain;
-    index index.html;
-
-    # for extra https discovery (XEP-0256)
-    location /.well-known/acme-challenge {
-        allow all;
-    }
-
-    # bosh specific
-    location /http-bind {
-        proxy_pass  https://localhost:5281/http-bind;
-
-        proxy_set_header Host $host;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_buffering off;
-        tcp_nodelay on;
-    }
-
-    # websocket specific
-    location /xmpp-websocket {
-        proxy_pass https://localhost:5281/xmpp-websocket;
-
-        proxy_http_version 1.1;
-        proxy_set_header Connection "Upgrade";
-        proxy_set_header Upgrade $http_upgrade;
-
-        proxy_set_header Host $host;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_read_timeout 900s;
-    }
-
-    # general proxy
-    location / {
-        proxy_pass https://localhost:5281;
-
-        proxy_set_header Host $host;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_set_header X-Real-IP $remote_addr;
-    }
-    ...
-    # Certbot stuff
-}
-# HTTP server block (the one that certbot creates)
-server {
-    ...
-}
-
-

Also, you need to add the following to your actual your.domain (this cannot be a subdomain) configuration file:

-
server {
-    ...
-    location /.well-known/host-meta {
-        default_type 'application/xrd+xml';
-        add_header Access-Control-Allow-Origin '*' always;
-    }
-
-    location /.well-known/host-meta.json {
-        default_type 'application/jrd+json';
-        add_header Access-Control-Allow-Origin '*' always;
-    }
-    ...
-}
-
-

And you will need the following host-meta and host-meta.json files inside the .well-known/acme-challenge directory for your.domain (following my nomenclature: /var/www/yourdomaindir/.well-known/acme-challenge/).

-

For host-meta file:

-
<?xml version='1.0' encoding='utf-8'?>
-<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
-    <Link rel="urn:xmpp:alt-connections:xbosh"
-        href="https://xmpp.your.domain:5281/http-bind" />
-    <Link rel="urn:xmpp:alt-connections:websocket"
-        href="wss://xmpp.your.domain:5281/xmpp-websocket" />
-</XRD>
-
-

And host-meta.json file:

-
{
-    "links": [
-        {
-            "rel": "urn:xmpp:alt-connections:xbosh",
-                "href": "https://xmpp.your.domain:5281/http-bind"
-        },
-        {
-            "rel": "urn:xmpp:alt-connections:websocket",
-                "href": "wss://xmpp.your.domain:5281/xmpp-websocket"
-        }
-    ]
-}
-
-

Remember to have your prosody.conf file symlinked (or otherwise discoverable by Nginx) in the sites-enabled directory. You can now test the configuration and restart your nginx service:

-
nginx -t
-systemctl restart nginx.service
-
-

Coturn

-

Coturn is an implementation of a TURN/STUN server, which (at least in the XMPP world) is generally used for voice call support and external service discovery.

-

Install the coturn package:

-
pacman -S coturn
-
-

You can modify the configuration file (located at /etc/turnserver/turnserver.conf) as desired, but at least you need to make the following changes (uncomment or edit):

-
use-auth-secret
-realm=proxy.your.domain
-static-auth-secret=YOUR SUPER SECRET TURN PASSWORD
-
-
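One way to generate that shared secret (assuming openssl is installed; any long random string works) is:

```shell
# Generate a 64-character hex string to use as the static TURN secret.
openssl rand -hex 32
```

The same value must appear both here and in the secret field of Prosody's external_services block shown earlier.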

I’m sure there is more configuration to be made, like using SQL to store data and whatnot, but for now this is enough for me. Note that some functionality needed to create dynamic users for the TURN server may be missing; to be honest I haven’t tested this, since I don’t use that feature in my XMPP clients. If it doesn’t work, or you know of an error or missing configuration, don’t hesitate to contact me.

-

Start/enable the turnserver service:

-
systemctl start turnserver.service
-systemctl enable turnserver.service
-
-

You can test if your TURN server works at Trickle ICE. You may need to add a user in the turnserver.conf to test this.

-

Wrapping up

-

At this point you should have a working XMPP server, start/enable the prosody service now:

-
systemctl start prosody.service
-systemctl enable prosody.service
-
-

And you can add your first user with the prosodyctl command (it will prompt you to add a password):

-
prosodyctl adduser user@your.domain
-
-

You may want to add a compliance user, so you can check if your server is set up correctly. To do so, go to XMPP Compliance Tester and enter the compliance user credentials. It should have a similar compliance score to mine:

-

-

Additionally, you can test the security of your server at IM Observatory; here you only need to specify your domain.name (not xmpp.domain.name, if you set up the SRV DNS records correctly). Again, it should have a similar score to mine:

-

xmpp.net score

-

You can now log in with your XMPP client of choice; if it asks for the server, it should be xmpp.your.domain (or your.domain for some clients), with login credentials you@your.domain and the password you chose (which you can change in most clients).

-

That’s it, send me a message at david@luevano.xyz if you were able to set up the server successfully.

- - - - -
- -
- - - - \ No newline at end of file -- cgit v1.2.3-54-g00ecf