That's right, the art.luevano.xyz sub-domain is all set up now, for the art stuff you know. So yeah, I'm happy about that.
-
This was thanks to rewriting the way pyssg handles templates: I now use the jinja system instead of the mess I was doing before.
Those who know me will know that I've spent around 2 years trying to get fiber optic internet (specifically from T*lm*x). The problem is that there were never any nodes/terminals available or, honestly, that the technicians didn't even want to do their job because they're used to you having to slip them some cash to get it installed.
-
Well, the point is that I was stuck putting up with the horrible company *zz*, which only has copper; the service is bad and they raise the price all the time. Because of that last part I went back to check other companies' prices to compare, and it turns out they were charging me around $100 - $150 pesos extra for the same package I already had/have. I was already pissed off at that point, and it didn't help at all that I tried talking to the very incompetent support people and they couldn't, let's say, "sort me out", because how is it possible that, after being a customer for around 5 years, they can't even let me know they now have better packages (which honestly are the same package, just cheaper)?
-
I tried asking them to switch me to the current package (everything the same, the only difference being the price), but it turns out they'd put me on a fixed-term contract. Obviously that lit a fuse under me, so I checked with T*lm*x and, to my surprise, it said there was now fiber optic available at my place. I started the portability process and they told me it would be installed in about two weeks, but it turns out the absolute legend of a technician called me the very next day to say he was ALREADY OUTSIDE MY HOUSE to install it. I won.
-
Turns out there are nodes/terminals now; in fact they installed 3 new ones and they're completely empty, so I got really lucky, and the absolute legend of a technician got it done in half a second without any trouble, asking for nothing but details on where I wanted the modem. I didn't have any cash on me in case he expected a tip, and he was still super cool about it.
\ No newline at end of file
diff --git a/live/blog/a/arch_logs_flooding_disk.html b/live/blog/a/arch_logs_flooding_disk.html
deleted file mode 100644
index eb2c835..0000000
--- a/live/blog/a/arch_logs_flooding_disk.html
+++ /dev/null
@@ -1,189 +0,0 @@
-Configure system logs on Arch to avoid filled up disk -- Luévano's Blog
Configure system logs on Arch to avoid filled up disk
-
-
It’s been a while since I’ve been running a minimal server on a VPS, a pretty humble one with just 32 GB of storage, which works for me as I’m only hosting a handful of services. At some point I started noticing that the disk kept filling up every time I checked.
-
Turns out that, out of the box, Arch has a default config for systemd‘s journald that keeps a persistent journal log but doesn’t limit how much logging is kept. This means that, depending on how many services you run and how aggressively they log, the disk can fill up pretty quickly. In my case I had around 15 GB of logs between the normal journal directory, the nginx directory and my now unused prosody instance.
-
For prosody it was just a matter of deleting the directory as I’m not using it anymore, which freed around 4 GB of disk space.
-For journal I did a combination of configuring SystemMaxUse and creating a Namespace for all “email” related services as mentioned in the Arch wiki: systemd/Journal; basically just configuring /etc/systemd/journald.conf (and /etc/systemd/journald@email.conf with the comment change) with:
-
[Journal]
-Storage=persistent
-SystemMaxUse=100MB # 50MB for the "email" Namespace
-
-
And then for each service that I want to use this “email” Namespace I add:
-
[Service]
-LogNamespace=email
-
-
This can be added manually or by executing systemctl edit service_name.service, which creates an override file that is read on top of the normal service configuration. Once configured, apply it by running systemctl daemon-reload and systemctl restart service_name.service (probably also restart systemd-journald).
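As a quick sketch of the whole flow, using postfix as a hypothetical “email” service:
-
systemctl edit postfix.service      # paste the [Service] block from above
systemctl daemon-reload
systemctl restart postfix.service
systemctl restart systemd-journald
-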
-
I also disabled the logging for ufw by running ufw logging off, as it logs everything to the kernel “unit” and I didn’t find a way to pipe its logs to a separate directory. It really isn’t that useful anyway, as most of the entries are just the usual [UFW BLOCK] logs. If I ever need to debug I’ll just enable it again. Note that you can also change the logging level if you still want some kind of logging.
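For reference, the commands in question (levels being off/low/medium/high/full):
-
ufw logging off
# or keep minimal logging instead:
ufw logging low
-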
-
Finally, to clean up the nginx logs you need to install logrotate (pacman -S logrotate), as that is what is used to rotate the nginx log directory. nginx already “installs” a config file for logrotate located at /etc/logrotate.d/; I just added a few lines:
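The exact lines vary per setup, but as a rough sketch, a typical /etc/logrotate.d/nginx on Arch with a couple of tweaks (rotation frequency and count being the usual knobs) looks something like:
-
/var/log/nginx/*.log {
    weekly
    rotate 4
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        test ! -r /run/nginx.pid || kill -USR1 $(cat /run/nginx.pid)
    endscript
}
-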
Once you’re ok with your config, it’s just a matter of running logrotate -v -f /etc/logrotate.d/nginx, which forces a run of the nginx rule. After this, logrotate will run daily if you enable the logrotate timer: systemctl enable logrotate.timer.
\ No newline at end of file
diff --git a/live/blog/a/asi_nomas_esta_quedando.html b/live/blog/a/asi_nomas_esta_quedando.html
deleted file mode 100644
index b3793a2..0000000
--- a/live/blog/a/asi_nomas_esta_quedando.html
+++ /dev/null
@@ -1,154 +0,0 @@
-Así nomás está quedando el página -- Luévano's Blog
Así nomás está quedando el página
-
-
I've been tidying up the sItE a bit more; I finally added the "sections" for contact and donate in case there's some madman out there who wants to throw money my way.
-
I also set up an XMPP server which, in short, is a decentralized instant messaging (and more) protocol, so anyone can create an account on whatever server they want and connect with accounts created on another server... exactly, just like with email. And this is awesome, because if you have your own server, just like with an email server, you control what features it has, who can create an account, whether there's end-to-end encryption (or at least end-to-server), among a ton of other things.
-
Right now this server is compliant ("SUMISO" in Spanish, hehe) to work with the conversations app and the movim social network, but it should really work with almost any XMPP client, unless that client implements something my server doesn't have. I also set up a Matrix server, which is very similar but under a different protocol and feels more like a discord/slack (at least in element); very cool as well.
-
While there are still things to do on these two servers I set up (besides writing some entries documenting how I did it), I want to move on to something else: setting up a drawings section, which in theory is pretty simple, but since I want to be able to automate publishing them, I want to modify pyssg a bit so it works nicely for this.
-
And lastly, I also want to tweak the CSS a bit, because I left it in a really crappy state and I want to add/adjust some things so it's cleaner and somewhat pretty... within reason, because evidently I couldn't care less if it looks like a page from the 2000s.
-
Update: I already took the XMPP server down because it consumed quite a lot of resources and I wasn't using it that much; if I get a better server in the future I might host it again.
\ No newline at end of file
diff --git a/live/blog/a/devs_android_me_trozaron.html b/live/blog/a/devs_android_me_trozaron.html
deleted file mode 100644
index 0821362..0000000
--- a/live/blog/a/devs_android_me_trozaron.html
+++ /dev/null
@@ -1,162 +0,0 @@
-Los devs de Android/MIUI me trozaron -- Luévano's Blog
Los devs de Android/MIUI me trozaron
-
-
I've spent two weeks putting off this entry because I was really angry (still am, but it's wearing off) and felt lazy about it. Anyway, first of all, this thing needs a bit of context on two little things:
-
-
Tachiyomi: An android application I use to download and read manga. The important bit here is that by default manga is saved with each page being a single image, so moving manga from one place to another takes a long time.
-
Adoptable storage: An android feature that basically lets you use an external micro SD (mSD) as if it were internal, encrypting it and leaving the mSD unusable on any other device. The internal memory gets lost or something like that (in my experience), so it seems to be quite useful when the internal memory capacity is low.
-
-
Now then, let's take it piece by piece. First of all, what happened is that I ordered an mSD with more capacity than the one I already had (64 GB -> 512 GB, poggies), because lately I've been downloading and reading a lot of manga, so I was running out of space. It arrived on my birthday, which was great; I started backing up the mSD I already had and getting everything ready, very nice, very nice.
-
I started running into problems because, moving so many small files (remember tachiyomi treats each page as a single image), the connection between my phone and my computer kept dropping for some reason; lots of trouble in general. So I just took the new mSD out and plugged it directly into my computer through an adapter, to struggle less and make it faster.
-
This whole thing of moving files directly on the mSD can end up corrupting the memory; I don't know the details but it happens (or maybe I'm an idiot and did something wrong). So when I finished moving everything to the new mSD and put it in the phone, it threw a fit claiming it couldn't detect it and wanting to format the mSD. At this point I didn't care much; it was just a matter of moving the files again and being more careful. "No issues from my end", as I'd say in my standups.
-
Everything went to hell because at some point, when choosing to go ahead and format the mSD, my phone gave me the options of "use the micro SD for the phone" or "use the micro SD as portable storage" (or something along those lines), and I, stupidly, chose the first one, because it made sense to me: "well yeah, I'm going to use this memory for this phone".
-
Well, I was screwed; it turns out what that first option really meant was that the micro SD would be used as internal storage via this adoptable storage thing. So I basically lost my internal memory capacity (128 GB approx.) and the whole new mSD was turned into internal memory. Everything got merged; if I tried to take the mSD out everything went to hell and I couldn't use many applications. "No problem", I thought, "it's just a matter of turning off this adoptable storage crap".
-
No way, said the Android devs; this thing is one-way only: you can enable adoptable storage, but to disable it you are forced to factory reset your phone. I got screwed, I ate dirt, I lost.
-
So that's what I did, oh well. I backed up everything I could think of (I also realized G**gl* authenticator is garbage since it doesn't let you make backups, among other things; better use Aegis authenticator), deactivated everything that had to be deactivated and did the factory reset, whatever. But, as always, things went wrong and I had to eat dirt from the bank because they blocked my card, I lost credentials needed for work (that got solved quickly), etc., etc. It doesn't matter anymore, almost everything is solved now; all that's left is going to the bank to sort out the blocked card (that's for another rant: damn useless banking apps, they have one job and they do it badly).
-
At the end of the day, the cause of the problem was the damn manga (for trying to back it up), which I ended up downloading again manually, and it turned out better because apparently tachiyomi added the option to "zip" manga into the CBZ format, so now it's easier to move around, the phone doesn't choke, etc., etc.
-
Lastly, I want to say that the Android devs are idiots for not making the adoptable storage option reversible, and the MIUI ones even more so for not giving details on what their formatting options mean, especially when an option is so devastating that reverting it requires factory resetting your phone; more than anything it's MIUI's fault: on top of shoving a ton of A(i)DS into all their apps, they can't write a decent description for their options. REEEE.
\ No newline at end of file
diff --git a/live/blog/a/el_blog_ya_tiene_timestamps.html b/live/blog/a/el_blog_ya_tiene_timestamps.html
deleted file mode 100644
index bc10bf1..0000000
--- a/live/blog/a/el_blog_ya_tiene_timestamps.html
+++ /dev/null
@@ -1,155 +0,0 @@
-Así es raza, el blog ya tiene timestamps -- Luévano's Blog
Así es raza, el blog ya tiene timestamps
-
-
Well, that's it; this entry is just to give an update on my first post. I've now modified the ssg enough for it to handle timestamps, and I'm more familiar with the script now so I'll be able to extend it further, but for now the entries have their creation date (and modification date, where applicable) at the end, and the index is now sorted by date, which is somewhat simple for now but easy to extend.
-
The only thing left is to change the blog's format a bit (and the site's in general), because in a moment of desperation I set all the text justified and it doesn't always look good, so that needs fixing. And even though it took me longer than I would've liked, "that's just how it's turning out", as a certain character would say.
-
The modified ssg is in my dotfiles (or directly here).
Since in the end I stopped using the modified ssg, this thing no longer exists.
-
Lastly, I also removed the .html extensions from the URLs, because they look really crappy, but links ending in .html still redirect to their extensionless counterpart, so there's no issue at all.
-
Update: I'm now using my own solution instead of ssg, which I called pyssg, and which I start talking about here.
\ No newline at end of file
diff --git a/live/blog/a/first_blog_post.html b/live/blog/a/first_blog_post.html
deleted file mode 100644
index 314abb1..0000000
--- a/live/blog/a/first_blog_post.html
+++ /dev/null
@@ -1,147 +0,0 @@
-This is the first blog post, just for testing purposes -- Luévano's Blog
This is the first blog post, just for testing purposes
-
-
I’m making this post just to figure out how ssg5 and lowdown are supposed to work, and eventually rssg.
-
At the moment I’m not satisfied, because there’s no automatic date insertion into 1) the html file, 2) the blog post itself and 3) the listing system on the blog homepage, which also has a problem with the ordering of the entries. And all of this just because I didn’t want to use Luke’s lb solution, as I don’t really like how he handles the scripts that much (but they just work).
-
Hopefully, for tomorrow all of this will be sorted out and I’ll have a working blog system.
-
Update: I’m now using my own solution which I called pyssg, of which I talk about here.
Note that this is mostly for personal use, so there’s no user/authentication control other than that of normal ssh. And as with the other entries, most if not all commands here are run as root unless stated otherwise.
I might get tired of saying this (it’s just copy paste, basically)… but you will need the same prerequisites as before (check my website and mail entries), with the extras:
-
-
(Optional, if you want a “front-end”) A CNAME for “git” and (optionally) “www.git”, or some other name for your sub-domains.
-
An SSL certificate; if you’re following the other entries, add a git.conf and run certbot --nginx to extend the certificate.
If not installed already, install the git package:
-
pacman -S git
-
-
On Arch Linux, when you install the git package, a git user is automatically created, so all you have to do is decide where you want to store the repositories. I like them to be on /home/git, as if git were a “normal” user. So, create the git folder (with corresponding permissions) under /home and set the git user’s home to /home/git:
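Something along these lines should do it:
-
mkdir /home/git
chown git:git /home/git
usermod -d /home/git git
-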
Also, the git user is “expired” by default and will be locked (needs a password), change that with:
-
chage -E -1 git
-passwd git
-
-
Give it a strong one and remember to use PasswordAuthentication no for ssh (as you should). Create the .ssh/authorized_keys for the git user and set the permissions accordingly:
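Roughly:
-
mkdir -p /home/git/.ssh
touch /home/git/.ssh/authorized_keys
chmod 700 /home/git/.ssh
chmod 600 /home/git/.ssh/authorized_keys
chown -R git:git /home/git/.ssh
-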
Now it’s a good idea to copy over your local SSH public keys to this file, to be able to push/pull to the repositories. Do it by either manually copying them or using ssh‘s built-in ssh-copy-id (for that you may want to check your ssh configuration in case you don’t let people access your server with user/password).
-
Next, and almost finally, we need to edit the git-daemon service, located at /usr/lib/systemd/system/ (called git-daemon@.service):
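From memory, the stock ExecStart resembles the commented line below (double check your local unit file); the edited one adds the flag and the new base path:
-
# ExecStart=-/usr/lib/git-core/git-daemon --inetd --export-all --base-path=/srv/git
ExecStart=-/usr/lib/git-core/git-daemon --inetd --export-all --base-path=/home/git --enable=receive-pack
-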
I just appended --enable=receive-pack and note that I also changed the --base-path to reflect where I want to serve my repositories from (has to match what you set when changing git user’s home).
-
Now, go ahead and start and enable the git-daemon socket:
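Presumably:
-
systemctl enable --now git-daemon.socket
-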
You’re basically done. Now you should be able to push/pull repositories to your server… except, you haven’t created any repository in your server, that’s right, they’re not created automatically when trying to push. To do so, you have to run (while inside /home/git):
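The two commands would be something like this (repository name hypothetical):
-
git init --bare your-repo.git
chown -R git:git your-repo.git
-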
Those two lines above will need to be run each time you want to add a new repository to your server. There are options to “automate” this but I like it this way.
-
After that you can already push/pull to your repository. I have my repositories (locally) set up so I can push to more than one remote at the same time (my server, GitHub, GitLab, etc.); to do so, check this gist.
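A minimal sketch of what the nginx configuration for cgit could look like, assuming fcgiwrap is installed and running (socket path and asset locations may differ on your setup):
-
server {
    listen 80;
    server_name git.luevano.xyz www.git.luevano.xyz;

    # static assets (css, logo)
    location ~* ^.+\.(css|png|ico)$ {
        root /usr/share/webapps/cgit;
    }

    location / {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/lib/cgit/cgit.cgi;
        fastcgi_param PATH_INFO $uri;
        fastcgi_param QUERY_STRING $args;
        fastcgi_pass unix:/run/fcgiwrap.sock;
    }
}
-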
Where the server_name line depends on you; I have mine set up as git.luevano.xyz and www.git.luevano.xyz. Optionally run certbot --nginx to get a certificate for those domains if you don’t have one already.
-
Now, all that’s left is to configure cgit. Create the configuration file /etc/cgitrc with the following content (my personal options, pretty much the default):
Where you can uncomment the robots line to keep web crawlers (like Google’s) from indexing your git web app. And at the end keep all your repositories (the ones you want to make public); for example, for my dotfiles I have:
-
...
-repo.url=.dots
-repo.path=/home/git/.dots.git
-repo.owner=luevano
-repo.desc=These are my personal dotfiles.
-...
-
-
Otherwise you could let cgit automatically detect your repositories (you have to be careful if you want to keep “private” repos) using the option scan-path, and set up .git/description for each repository. For more, you can check cgitrc(5).
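For example, pointing it at the same base path used throughout this entry:
-
scan-path=/home/git
-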
And edit it to use the version 3 and add --inline-css for more options without editing cgit‘s CSS file:
-
...
-# This is for version 2
-# exec highlight --force -f -I -X -S "$EXTENSION" 2>/dev/null
-
-# This is for version 3
-exec highlight --force --inline-css -f -I -O xhtml -S "$EXTENSION" 2>/dev/null
-...
-
-
Finally, enable the filter in /etc/cgitrc configuration:
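Assuming the filter script shipped with the cgit package, the line would look like:
-
source-filter=/usr/lib/cgit/filters/syntax-highlighting.sh
-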
That would be everything. If you need support for more stuff like compressed snapshots or support for markdown, check the optional dependencies for cgit.
\ No newline at end of file
diff --git a/live/blog/a/hoy_toco_desarrollo_personaje.html b/live/blog/a/hoy_toco_desarrollo_personaje.html
deleted file mode 100644
index d66c4cd..0000000
--- a/live/blog/a/hoy_toco_desarrollo_personaje.html
+++ /dev/null
@@ -1,160 +0,0 @@
-Hoy me tocó desarrollo de personaje -- Luévano's Blog
Hoy me tocó desarrollo de personaje
-
-
I knew today wasn't going to be that good of a day, but I didn't know it would be this horrible; I got some character development and pulled the bad ending.
-
Basically I had two missions to complete today: go to the bank for some paperwork and get vaccinated against Covid-19. Very simple tasks.
-
First of all I woke up from a horrible nightmare, the kind where you get sleep paralysis while trying to wake up, waited until it was almost the end of my work schedule, showered and headed straight to the bank. All good up to this point.
-
On the way to the bank, while chatting with the Uber driver, the topic of the bank's hours came up. Very calm, I said "well, I'm running a bit late, but I'll make it, they close at 5, right?", to which the driver replied "nope boss, at 4, and they leave half an hour early"; I was floored. I checked and indeed they closed at 4. So I told him I'd change the route to head straight to the vaccination site, but it was already too late and it was in the opposite direction. "No worries, drop me off here and I'll request another ride, don't sweat it", I told him, and as always he wished me a better day; fortunately the bank was still open for what I had to do, so it was a nice turn of events. I got really happy and assumed it would be a good day, like my driver told me; I literally DIDN'T KNOW.
-
I left happy, having completed that mission, ready to go get vaccinated. I requested another Uber to where I had to go and all was good. I had to walk a lot because the entrance was ridiculously far from where the driver dropped me off, but no big deal, that was the least of it. I got discouraged when I saw a stupid amount of people, a line that spanned the entire parking lot and took way too many turns; "oh well", I said, "at most I'll be here an hour, hour and a half"... again, I literally DIDN'T KNOW.
-
Half an hour went by and I had advanced what seemed to be a quarter of the line, so everything was going fine. Well, no, I had advanced the equivalent of an eighth of the line; this wasn't going to be done in an hour or an hour and a half. To make things worse, it was all under the oh-so-beloved Chiwawa sun. "No problem, I'll entertain myself chatting with someone on whatsapp"; well, no, apparently I hadn't charged my phone and it was at 15-20% battery... floored, again.
-
My battery died, an hour had already passed and the line seemed infinite; we were simply moving too slowly, even though the people behind me kept repeating "look, it's moving really fast, we're almost there", the fools. I spent approximately 3 hours in that line, enduring stupid conversations around me, people complaining about standing (I was complaining too, but inside my head), and for some reason whole families showed up of which, at the end of the day, only one or two members actually went in to get vaccinated.
-
Anyway, the torture ended and it was time to head home, all good. "No problem, no Uber this time, I'll catch a bus here", I thought. But no, not a single bus passed during the hour I stood waiting, and of the 5 taxis I tried to flag down, NONE stopped. I decided to walk; what did it matter anymore, at that point I was just getting angry over nothing.
-
On the way I saw an Oxxo and decided to take a detour to buy something to drink because I was badly dehydrated. The very second I turned towards the Oxxo, a bus flew past, and all I could think of was the driver telling me "hehe, tough luck :)". I exploded, I was done, I simply lost, I pulled the bad ending.
-
I was fed up and was even going to buy a charger just to get home faster; I was tired of the whole day, the quest simply ended there, I had gotten the worst ending. The good thing is that it occurred to me to ask the cashier for a charger, and he helped me out. All good, I requested my Uber and got home safe and sound, but with the worst rage I'd felt in a long time. Simply... my ass? blasted. Today I got some serious character development; D*****o really outdid himself.
-
The only salvageable thing was that there was a really pretty girl (more like 5) in the line; too bad my character's stats have conversations with strangers locked.
-
And that's it, this thing already served its purpose of letting me vent; apologies for the crappy writing. Later.
\ No newline at end of file
diff --git a/live/blog/a/jellyfin_server_with_sonarr_radarr.html b/live/blog/a/jellyfin_server_with_sonarr_radarr.html
deleted file mode 100644
index 44f0d0a..0000000
--- a/live/blog/a/jellyfin_server_with_sonarr_radarr.html
+++ /dev/null
@@ -1,686 +0,0 @@
-Set up a media server with Jellyfin, Sonarr and Radarr -- Luévano's Blog
Set up a media server with Jellyfin, Sonarr and Radarr
-
-
Second part of my self hosted media server. This is a direct continuation of Set up qBitTorrent with Jackett for use with Starr apps, which will be mentioned as “first part” going forward. Sonarr, Radarr, Bazarr (Starr apps) and Jellyfin setups will be described in this part. Same introduction applies to this entry, regarding the use of documentation and configuration.
-
Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.
-
Kindly note that I do not condone the use of BitTorrent for illegal activities. I take no responsibility for what you do when setting up anything shown here. It is for you to check your local laws before using automated downloaders such as Sonarr and Radarr.
Radarr is a movie collection manager that can be used to download movies via torrents. This is actually a fork of Sonarr, so they’re pretty similar, I just wanted to set up movies first.
-
Install from the AUR with yay:
-
yay -S radarr
-
-
Add the radarr user to the servarr group:
-
gpasswd -a radarr servarr
-
-
The default port that Radarr uses is 7878 for http (the one you need for the reverse proxy).
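Start and enable the service (the AUR package ships a radarr.service unit, to the best of my knowledge):
-
systemctl enable --now radarr.service
-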
This will start the service and create the default configs under /var/lib/radarr. You need to change the URLBase as the reverse proxy is under a subdirectory (/radarr). Edit /var/lib/radarr/config.xml:
-
...
-<UrlBase>/radarr</UrlBase>
-...
-
-
Then restart the radarr service:
-
systemctl restart radarr.service
-
-
Now https://isos.yourdomain.com/radarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials.
This is personal preference and it dictates your preferred file sizes. You can follow TRaSH: Quality settings to maximize the quality of the downloaded content and restrict low quality stuff.
-
Personally, I think TRaSH’s quality settings are a bit elitist and first world-y. I’m fine with whatever and the tracker I’m using has the quality I want anyways. I did, however, set it to a minimum of 0 and maximum of 400 for the qualities shown in TRaSH’s guide. Configuring anything below 720p shouldn’t be necessary anyways.
Again, this is also completely a personal preference selection and depends on the quality and filters you want. My custom format selections are mostly based on TRaSH: HD Bluray + WEB quality profile.
-
The only Unwanted format that I’m not going to use is the Low Quality (LQ) as it blocks one of the sources I’m using to download a bunch of movies. The reasoning behind the LQ custom format is that these release groups don’t care much about quality (they keep low file sizes) and name tagging, which I understand but I’m fine with this as I can upgrade movies individually whenever I want (I want a big catalog of content that I can quickly watch).
As mentioned in Custom Formats and Quality, this is completely a personal preference. I’m going to go for “Low Quality” downloads while still following some of the conventions from TRaSH. I’m using the TRaSH: HD Bluray + WEB quality profile with the exclusion of the LQ profile.
-
I set the name to “HD Bluray + WEB”. I’m also not upgrading the torrents for now. Language set to “Original”.
Pretty straightforward: just click on the giant “+” button and click on the qBitTorrent option. Then configure:
-
-
Name: can be anything, just an identifier.
-
Enable: enable it.
-
Host: use 127.0.0.1. For some reason I can’t make it work with the reverse proxied qBitTorrent.
-
Port: the port number you chose, 30000 in my case.
-
Url Base: leave blank as even though you have it exposed under /qbt, the service itself is not.
-
Username: the Web UI username, admin by default.
-
Password: the Web UI password, adminadmin by default (you should’ve changed it if you have the service exposed).
-
Category: movies.
-
-
Everything else can be left as default, but maybe change Completed Download Handling if you’d like. Same goes for the general Failed Download Handling download clients’ option.
Also easy to set up, also just click on the giant “+” button and click on the custom Torznab option (you can also use the preset -> Jackett Torznab option). Then configure:
-
-
Name: can be anything, just an identifier. I like to do “Jackett - INDEXER”, where “INDEXER” is just an identifier.
-
URL: http://127.0.0.1:9117/jack/api/v2.0/indexers/YOURINDEXER/results/torznab/, where YOURINDEXER is specific to each indexer (yts, nyaasi, etc.). Can be directly copied from the indexer’s “Copy Torznab Feed” button on the Jackett Web UI.
-
API Path: /api, leave as is.
-
API Key: this can be found at the top right corner in Jackett’s Web UI.
-
Categories: which categories to use when searching; these are generic categories until you test/add the indexer. After you add the indexer you can come back and select your preferred categories (like just toggling the movies categories).
-
Tags: I like to add a tag for the indexer name like yts or nyaa. This is useful to control which indexers to use when adding new movies.
-
-
Everything else on default. Download Client can also be set, which can be useful to keep different categories per indexer or something similar. Seed Ratio and Seed Time can also be set and are used to manage when to stop the torrent, this can also be set globally on the qBitTorrent Web UI, this is a personal setting.
You can now start to download content by going to Movies -> Add New. Basically just follow the Radarr: How to add a movie guide. The screenshots from the guide are a bit outdated but it contains everything you need to know.
-
I personally use:
-
-
Monitor: Movie Only.
-
Minimum Availability: Released.
-
Quality Profile: “HD Bluray + WEB”, the one configured in this entry.
-
Tags: the indexer name I want to use to download the movie, usually just yts for me (remember this is a “LQ” release group, so if you have that custom format disable it) as mentioned in Indexers. If you don’t specify a tag it will only use indexers that don’t have a tag set.
-
Start search for missing movie: toggled on. Immediately starts searching for the movie and begins the download.
-
-
Once you click on “Add Movie” it will add it to the Movies section and start searching and selecting the best torrent it finds, according to the “filters” (quality settings, profile and indexer(s)).
-
When it selects a torrent it sends it to qBitTorrent and you can even go ahead and monitor it over there. Else you can also monitor at Activity -> Queue.
-
After the movie is downloaded and processed by Radarr, it will create the appropriate hardlinks to the media/movies directory, as set in First part: Directory structure.
Sonarr is a TV series collection manager that can be used to download series via torrents. Most of the install process, configuration and whatnot is going to be basically the same as with Radarr.
-
Install from the AUR with yay:
-
yay -S sonarr
-
-
Add the sonarr user to the servarr group:
-
gpasswd -a sonarr servarr
-
-
The default port that Sonarr uses is 8989 for http (the one you need for the reverse proxy).
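Start and enable the service (again, the AUR package presumably ships a matching unit):
-
systemctl enable --now sonarr.service
-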
This will start the service and create the default configs under /var/lib/sonarr. You need to change the URLBase as the reverse proxy is under a subdirectory (/sonarr). Edit /var/lib/sonarr/config.xml:
-
...
-<UrlBase>/sonarr</UrlBase>
-...
-
-
Then restart the sonarr service:
-
systemctl restart sonarr.service
-
-
Now https://isos.yourdomain.com/sonarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials.
Similar to Radarr: Quality this is personal preference and it dictates your preferred file sizes. You can follow TRaSH: Quality settings to maximize the quality of the downloaded content and restrict low quality stuff.
-
Will basically do the same as in Radarr: Quality: set minimum of 0 and maximum of 400 for everything 720p and above.
This is a bit different than with Radarr; the way it is configured is by setting “Release profiles”. I took the profiles from TRaSH: WEB-DL Release profile regex. The only change I might make is disabling the Low Quality Groups and/or the “Golden rule” filter (for x265 encoded video).
-
For me it ended up looking like this:
-
-
But yours can differ, as it’s mostly personal preference. For the “Quality profile” I’ll be using the default “HD-1080p” most of the time, but I also created a “HD + WEB (720/1080)” which works better for some shows.
Exactly the same as with Radarr: Download clients only change is the category from movies to tv (or whatever you want), click on the giant “+” button and click on the qBitTorrent option. Then configure:
-
-
Name: can be anything, just an identifier.
-
Enable: enable it.
-
Host: use 127.0.0.1.
-
Port: the port number you chose, 30000 in my case.
-
Url Base: leave blank as even though you have it exposed under /qbt, the service itself is not.
-
Username: the Web UI username, admin by default.
-
Password: the Web UI password, adminadmin by default (you should’ve changed it if you have the service exposed).
-
Category: tv.
-
-
Everything else can be left as default, but maybe change Completed Download Handling if you’d like. Same goes for the general Failed Download Handling download clients’ option.
Also exactly the same as with Radarr: Indexers, click on the giant “+” button and click on the custom Torznab option (this doesn’t have the Jackett preset). Then configure:
-
-
Name: can be anything, just an identifier. I like to do “Jackett - INDEXER”, where “INDEXER” is just an identifier.
-
URL: http://127.0.0.1:9117/jack/api/v2.0/indexers/YOURINDEXER/results/torznab/, where YOURINDEXER is specific to each indexer (eztv, nyaasi, etc.). Can be directly copied from the indexer’s “Copy Torznab Feed” button on the Jackett Web UI.
-
API Path: /api, leave as is.
-
API Key: this can be found at the top right corner in Jackett’s Web UI.
-
Categories: which categories to use when searching; these are generic categories until you test/add the indexer. After you add the indexer you can come back and select your preferred categories (like just toggling the TV categories).
-
Tags: I like to add a tag for the indexer name like eztv or nyaa. This is useful to control which indexers to use when adding new series.
-
-
Everything else on default. Download Client can also be set, which can be useful to keep different categories per indexer or something similar. Seed Ratio and Seed Time can also be set and are used to manage when to stop the torrent, this can also be set globally on the qBitTorrent Web UI, this is a personal setting.
Almost the same as with Radarr: Download content, but I’ve been personally selecting the torrents I want to download for each season/episode so far, as the indexers I’m using are all over the place and I like consistency. Will update if I find a (near) 100% automation process, but I’m fine with this anyway as I always monitor that everything is going fine.
-
Add by going to Series -> Add New. Basically just follow the Sonarr: Library add new guide. Adding series needs a few more options than movies in Radarr, but it’s straightforward.
-
I personally use:
-
-
Monitor: All Episodes.
-
Quality Profile: “HD + WEB (720/1080)”. This depends on what I want for that show; lately I’ve been experimenting with this one.
-
Series Type: Standard. For now I’m just downloading shows, but it has an Anime option.
-
Tags: the “indexer_name” I want to use to download the series; I’ve been using all indexers, so I just use all tags as I’m experimenting and trying multiple options.
-
Season Folder: enabled. I like as much organization as possible.
-
Start search for missing episodes: disabled. Depends on you; due to my indexers, I prefer to manually check the season packs, for example.
-
Start search for cutoff unmet episodes: disabled. Honestly don’t really know what this is.
-
-
Once you click on “Add X” it will add it to the Series section and will start as monitored. So far I haven’t noticed that it immediately starts downloading (because of the “Start search for missing episodes” setting) but I always click on unmonitor the series, so I can manually check (again, due to the low quality of my indexers).
-
When it automatically starts to download an episode/season it will send it to qBitTorrent and you can monitor it over there. Else you can also monitor at Activity -> Queue. Same thing goes if you download manually each episode/season via the interactive search.
-
To interactively search episodes/seasons go to Series and then click on any series, then click either on the interactive search button for the episode or the season, it is an icon of a person as shown below:
-
-
Then it will bring a window with the search results, where it shows the indexer it got the result from, the size of the torrent, peers, language, quality, the score it received from the configured release profiles, an alert in case the torrent is “bad” and the download button to manually download the torrent you want. An example shown below:
-
-
After the episode/season is downloaded and processed by Sonarr, it will create the appropriate hardlinks to the media/tv directory, as set in Directory structure.
Jellyfin is a media server “manager”, usually used to manage and organize video content (movies, TV series, etc.) which could be compared with Plex or Emby for example (take them as possible alternatives).
-
Install from the AUR with yay:
-
yay -S jellyfin-bin
-
-
I’m installing the pre-built binary instead of building it as I was getting a lot of errors and the server was even crashing. You can try installing jellyfin instead.
-
Add the jellyfin user to the servarr group:
-
gpasswd -a jellyfin servarr
-
-
You can already start/enable the jellyfin.service which will start at http://127.0.0.1:8096/ by default where you need to complete the initial set up. But let’s create the reverse proxy first then start everything and finish the set up.
I’m going to have my jellyfin instance under a subdomain with an nginx reverse proxy as shown in the Arch wiki. For that, create a jellyfin.conf at the usual sites-<available/enabled> path for nginx:
-
server {
- listen 80;
- server_name jellyfin.yourdomain.com; # change accordingly to your wanted subdomain and domain name
- set $jellyfin 127.0.0.1; # jellyfin is running at localhost (127.0.0.1)
-
- # Security / XSS Mitigation Headers
- add_header X-Frame-Options "SAMEORIGIN";
- add_header X-XSS-Protection "1; mode=block";
- add_header X-Content-Type-Options "nosniff";
-
- # Content Security Policy
- # See: https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
- # Enforces https content and restricts JS/CSS to origin
- # External Javascript (such as cast_sender.js for Chromecast) must be whitelisted.
- add_header Content-Security-Policy "default-src https: data: blob: http://image.tmdb.org; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' https://www.gstatic.com/cv/js/sender/v1/cast_sender.js https://www.youtube.com blob:; worker-src 'self' blob:; connect-src 'self'; object-src 'none'; frame-ancestors 'self'";
-
- location = / {
- return 302 https://$host/web/;
- }
-
- location / {
- # Proxy main Jellyfin traffic
- proxy_pass http://$jellyfin:8096;
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header X-Forwarded-Protocol $scheme;
- proxy_set_header X-Forwarded-Host $http_host;
-
- # Disable buffering when the nginx proxy gets very resource heavy upon streaming
- proxy_buffering off;
- }
-
- # location block for /web - This is purely for aesthetics so /web/#!/ works instead of having to go to /web/index.html/#!/
- location = /web/ {
- # Proxy main Jellyfin traffic
- proxy_pass http://$jellyfin:8096/web/index.html;
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header X-Forwarded-Protocol $scheme;
- proxy_set_header X-Forwarded-Host $http_host;
- }
-
- location /socket {
- # Proxy Jellyfin Websockets traffic
- proxy_pass http://$jellyfin:8096;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection "upgrade";
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header X-Forwarded-Protocol $scheme;
- proxy_set_header X-Forwarded-Host $http_host;
- }
-}
-
Similarly to the isos subdomain, running certbot --nginx will autodetect the new subdomain and extend the existing certificate(s). Restart the nginx service for changes to take effect:
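If you use the available/enabled split, enable the site first (paths assumed), then restart:
-
ln -s /etc/nginx/sites-available/jellyfin.conf /etc/nginx/sites-enabled/
systemctl restart nginx.service
-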
Then navigate to https://jellyfin.yourdomain.com and either continue with the set up wizard if you didn’t already or continue with the next steps to configure your libraries.
-
The initial setup wizard makes you create an user (will be the admin for now) and at least one library, though these can be done later. For more check Jellyfin: Quick start.
-
Remember to use the configured directory as mentioned in Directory structure. Any other configuration (like adding users or libraries) can be done at the dashboard: click on the 3 horizontal lines on the top left of the Web UI then navigate to Administration -> Dashboard. I didn’t configure much other than adding a couple of users for me and friends, I wouldn’t recommend using the admin account to watch (personal preference).
-
Once there is at least one library it will show at Home along with the latest movies (if any) similar to the following (don’t judge, these are just the latest I added due to friend’s requests):
-
-
And inside the “Movies” library you can see the whole catalog where you can filter or just scroll as well as seeing Suggestions (I think this starts getting populated after a while) and Genres:
You can also install/activate plugins to get extra features. Most of the plugins you might want to use are already available in the official repositories and can be found in the “Catalog”. There are a lot of plugins that are focused around anime and TV metadata, as well as an Open Subtitles plugin to automatically download missing subtitles (though this is managed with Bazarr).
-
To activate plugins click on the 3 horizontal lines on the top left of the Web UI then navigate to Administration -> Dashboard -> Advanced -> Plugins and click on the Catalog tab (top of the Web UI). Here you can select the plugins you want to install. By default only the official ones are shown, for more you can add more repositories.
-
The only plugin I’m using is the “Playback Reporting”, to get a summary of what is being used in the instance. But I might experiment with some anime-focused plugins when the time comes.
Although not recommended and explicitly set to not download any x265/HEVC content (by using the Golden rule) there might be cases where the only option you have is to download such content. If that is the case and you happen to have a way to do Hardware Acceleration, for example by having an NVIDIA graphics card (in my case) then you should enable it to avoid using lots of resources on your system.
-
Using hardware acceleration will leverage your GPU to do the transcoding and save resources on your CPU. I tried streaming x265 content and it basically used 70-80% on all cores of my CPU, while on the other hand using my GPU it used the normal amount on the CPU (70-80% on a single core).
-
This will be the steps to install on an NVIDIA graphics card, specifically a GTX 1660 Ti. But more info and guides can be found at Jellyfin: Hardware Acceleration for other acceleration methods.
Ensure you have the NVIDIA drivers and utils installed. If you’ve done this in the past you can skip this part, else you might be using the default nouveau drivers. Follow the next steps to set up the NVIDIA drivers, which are basically a summary of NVIDIA: Installation for my setup, so double check the wiki in case you have an older NVIDIA graphics card.
-
Install the nvidia and nvidia-utils packages:
-
pacman -S nvidia nvidia-utils
-
-
Modify /etc/mkinitcpio.conf to remove kms from the HOOKS array. It should look like this (commented line is how it was for me before the change):
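Roughly like this, assuming the current default hooks (yours may differ):
-
# HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block filesystems fsck)
HOOKS=(base udev autodetect modconf keyboard keymap consolefont block filesystems fsck)
-
Then regenerate the initramfs with mkinitcpio -P and reboot for the driver change to take effect.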
This provides the jellyfin-ffmpeg executable, which is necessary for Jellyfin to do hardware acceleration; it needs to be this specific one.
-
Then in Jellyfin, go to the transcoding settings by clicking on the 3 horizontal lines on the top left of the Web UI and navigating to Administration -> Dashboard -> Playback -> Transcoding and:
-
-
Change the Hardware acceleration from “None” to “Nvidia NVENC”.
-
Some other options will pop up, just make sure you enable “HEVC” (which is x265) in the list of Enable hardware encoding for:.
-
Scroll down and specify the ffmpeg path, which is /usr/lib/jellyfin-ffmpeg/ffmpeg.
-
-
Don’t forget to click “Save” at the bottom of the Web UI, it will ask if you want to enable hardware acceleration.
Bazarr is a companion for Sonarr and Radarr that manages and downloads subtitles.
-
Install from the AUR with yay:
-
yay -S bazarr
-
-
Add the bazarr user to the servarr group:
-
gpasswd -a bazarr servarr
-
-
The default port that Bazarr uses is 6767 for http (the one you need for the reverse proxy), and it has pre-configured the default ports for Radarr and Sonarr.
Add the following setting in the server block of the isos.conf:
-
server {
- # server_name and other directives
- ...
-
- # Increase http2 max sizes
- large_client_header_buffers 4 16k;
-
- # some other blocks like location blocks
- ...
-}
-
-
Then add the following location blocks in the isos.conf, where I’ll keep it as /bazarr/:
-
location /bazarr/ {
- proxy_pass http://127.0.0.1:6767/bazarr/; # change port if needed
- proxy_http_version 1.1;
-
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header Host $http_host;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection "Upgrade";
-
- proxy_redirect off;
-}
-# Allow the Bazarr API through if you enable Auth on the block above
-location /bazarr/api {
- auth_request off;
- proxy_pass http://127.0.0.1:6767/bazarr/api;
-}
-
-
This is taken from Bazarr: Reverse proxy help. Restart the nginx service for the changes to take effect:
This will start the service and create the default configs under /var/lib/bazarr. You need to change the base_url for the necessary services as they’re running under a reverse proxy and under subdirectories. Edit /var/lib/bazarr/config/config.ini:
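A sketch of the relevant bits (from memory; the file contains many more options), matching the subdirectories used in this entry:
-
[general]
base_url = /bazarr

[sonarr]
base_url = /sonarr

[radarr]
base_url = /radarr
-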
Now https://isos.yourdomain.com/bazarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials. I also disabled Settings -> General -> Updates -> Automatic.
This doesn’t require much thinking and it’s up to personal preference, but I’ll list the ones I added:
-
-
OpenSubtitles.com: requires an account (the .org option is deprecated).
-
For a free account it only lets you download around 20 subtitles per day, and they contain ads. You could pay for a VIP account ($3 per month) and that will give you 1000 subtitles per day and no ads. But if you’re fine with 20 subtitles per day, you can try to get rid of the ads by running an automated script. Such an option can be found at brianspilner01/media-server-scripts: sub-clean.sh.
I’ve tested this setup for the following languages (with all default settings as stated in the guides):
-
-
English
-
Spanish
-
-
I tried with “Latin American Spanish” but those subtitles are hard to find; the two languages above work pretty well.
-
None of these require an Anti-Captcha account (which is a paid service), but I created one anyway in case I ever need it. You do need to add credits to it (pretty cheap, though) if you ever use it.
In the last couple of days I’ve been setting up a Komga server for manga downloaded using metafates/mangal (upcoming set up entry about it) and everything was fine so far, until I tried to download One Piece from MangaDex, for which mangal has a built-in scraper. Long story short, the issue was that MangaDex’s API only allows requesting manga chapters in chunks of 500, and the way that was being handled was completely wrong; specifics can be found on my commit (and the subsequent minor fix commit).
-
I tried to do a PR, but the project hasn’t been active since Feb 2023 (same reason I didn’t even try to do PRs on the other repos), so I closed it and will start working on my own fork, probably just merging everything Belphemur‘s fork has to offer, as he’s been working on mangal actively. I could probably just fork from him and/or just submit PRs to him, but I think I saw some changes I didn’t really like; I will have to look more into it.
-
Also, while trying to use some of the custom scrapers I ran into issues with the headless chrome explorer implementation, where it didn’t close on each manga chapter download, causing my CPU and Mem usage to get maxed out and losing control of the system. So I had to also fork metafates/mangal-lua-libs and “fixed” (I say fixed because that wasn’t the issue in the end, it was how the custom scrapers were using it; shitty documentation) the issue by adding the browser.Close() function to the headless Lua API (commit) and merged some commits from the original vadv/gopher-lua-libs, just to include any features added to the Lua libs needed.
-
Finally I forked the metafates/mangal-scrapers (which I actually forked NotPhantomX‘s fork as they had included more scrapers from some pull requests) to be able to have updated custom Lua scrapers (in which I also fixed the headless bullshit) and use them on my mangal.
-
So, I went down the rabbit hole of manga scraping because I wanted to set up my Komga server and, more importantly, I had to quickly learn Go and Lua (Lua was easier). I have to say that Go is super convoluted when it comes to module management; all the research I did led me to totally different answers, but that was just because of differing Go versions and the age of the responses.
-
\ No newline at end of file
diff --git a/live/blog/a/mail_server_with_postfix.html b/live/blog/a/mail_server_with_postfix.html
deleted file mode 100644
index defe607..0000000
--- a/live/blog/a/mail_server_with_postfix.html
+++ /dev/null
@@ -1,527 +0,0 @@
-Set up a Mail server with Postfix, Dovecot, SpamAssassin and OpenDKIM -- Luévano's Blog
Set up a Mail server with Postfix, Dovecot, SpamAssassin and OpenDKIM
-
-
The entry is going to be long because it’s a tedious process. This is also based on Luke Smith’s script, but adapted to Arch Linux (his script works on debian-based distributions). This entry is mostly so I can record all the notes required while I’m in the process of installing/configuring the mail server on a new VPS of mine; I was also going to write a script that does everything in one go (for Arch Linux), to be hosted here, but I haven’t had time to do it, so never mind this; if I ever do it I’ll make a new entry regarding it.
-
This configuration works for local users (users that appear in /etc/passwd) and does not use any type of SQL database. Do note that I’m not running Postfix in a chroot, which can be a problem if you’re following my steps, as noted by Bojan; in case you want to run it in a chroot, add the steps shown in the Arch wiki: Postfix in a chroot jail. The issue faced when following my steps with a chroot is that there will be problems resolving the hostname, due to /etc/hosts or /etc/hostname not being available in the chroot.
-
All commands executed here are run with root privileges, unless stated otherwise.
You will need a CNAME for “mail” and (optionally) “www.mail”, or whatever you want to call the sub-domains (although the RFC 2181 states that it NEEDS to be an A record, fuck the police).
-
An SSL certificate. You can use the SSL certificate obtained following my last post using certbot (just create a mail.conf and run certbot --nginx again).
-
Ports 25, 587 (SMTP), 465 (SMTPS), 143 (IMAP) and 993 (IMAPS) open on the firewall (I use ufw).
Postfix is a “mail transfer agent” which is the component of the mail server that receives and sends emails via SMTP.
-
Install the postfix package:
-
pacman -S postfix
-
-
We have two main files to configure (inside /etc/postfix): master.cf (master(5)) and main.cf (postconf(5)). We’re going to edit main.cf first either by using the command postconf -e 'setting' or by editing the file itself (I prefer to edit the file).
-
Note that the default file itself has a lot of comments describing what each thing does (or you can look up the manual, linked above). I used what Luke’s script does plus some other settings that worked for me.
-
Now, first locate where your website cert is, mine is at the default location /etc/letsencrypt/live/, so my certdir is /etc/letsencrypt/live/luevano.xyz. Given this information, change {yourcertdir} on the corresponding lines. The configuration described below has to be appended in the main.cf configuration file.
-
Certificates and ciphers to use for authentication and security:
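Something along these lines (a sketch; tighten protocols/ciphers to taste), with {yourcertdir} replaced accordingly:
-
smtpd_tls_cert_file = {yourcertdir}/fullchain.pem
smtpd_tls_key_file = {yourcertdir}/privkey.pem
smtpd_use_tls = yes
smtpd_tls_security_level = may
smtp_tls_security_level = may
-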
Specify the mailbox home; this is going to be a directory inside your user’s home containing the actual mail files, for example it will end up being /home/david/Mail/Inbox:
-
home_mailbox = Mail/Inbox/
-
-
Pre-configuration to work seamlessly with dovecot and opendkim:
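A guess at the shape of this block: host naming plus SASL via dovecot and the opendkim milter on its default port (8891); adjust to your actual setup:
-
myhostname = {yoursubdomain}.{yourdomainname}
mydomain = {yourdomainname}
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_milters = inet:127.0.0.1:8891
non_smtpd_milters = inet:127.0.0.1:8891
milter_default_action = accept
-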
Where {yourdomainname} is luevano.xyz in my case. Lastly, if you don’t want the sender’s IP and user agent (application used to send the mail), add the following line:
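One common way to do this is via header checks (file name and patterns below are illustrative, not necessarily the exact line used here):
-
smtp_header_checks = regexp:/etc/postfix/smtp_header_checks
# and in /etc/postfix/smtp_header_checks:
/^X-Originating-IP:/    IGNORE
/^User-Agent:/          IGNORE
-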
That’s it for main.cf, now we have to configure master.cf. This one is a bit more tricky.
-
First look up lines (they’re uncommented) smtp inet n - n - - smtpd, smtp unix - - n - - smtp and -o syslog_name=postfix/$service_name and either delete or uncomment them… or just run sed -i "/^\s*-o/d;/^\s*submission/d;/\s*smtp/d" /etc/postfix/master.cf as stated in Luke’s script.
-
Lastly, append the following lines to complete postfix setup and pre-configure for spamassassin.
-
smtp unix - - n - - smtp
-smtp inet n - y - - smtpd
- -o content_filter=spamassassin
-submission inet n - y - - smtpd
- -o syslog_name=postfix/submission
- -o smtpd_tls_security_level=encrypt
- -o smtpd_sasl_auth_enable=yes
- -o smtpd_tls_auth_only=yes
-smtps inet n - y - - smtpd
- -o syslog_name=postfix/smtps
- -o smtpd_tls_wrappermode=yes
- -o smtpd_sasl_auth_enable=yes
-spamassassin unix - n n - - pipe
- user=spamd argv=/usr/bin/vendor_perl/spamc -f -e /usr/sbin/sendmail -oi -f \${sender} \${recipient}
-
Before starting the postfix service you need to run newaliases first, but you can do a bit of configuration beforehand by editing the file /etc/postfix/aliases. I only change the root: you line (where you is the account that will be receiving “root” mail). After you’re done, run:
-
postalias /etc/postfix/aliases
-newaliases
-
-
At this point you’re done configuring postfix and you can already start/enable the postfix service:
Dovecot is an IMAP and POP3 server, which is what lets an email application retrieve the mail.
-
Install the dovecot and pigeonhole (sieve for dovecot) packages:
-
pacman -S dovecot pigeonhole
-
-
On Arch, by default, there is no /etc/dovecot directory with default configurations set in place, but the package does provide the example configuration files. Create the dovecot directory under /etc and, optionally, copy the dovecot.conf file and conf.d directory under the just created dovecot directory:
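To the best of my knowledge the examples live under /usr/share/doc/dovecot/example-config, so roughly:
-
mkdir /etc/dovecot
cp /usr/share/doc/dovecot/example-config/dovecot.conf /etc/dovecot/
cp -r /usr/share/doc/dovecot/example-config/conf.d /etc/dovecot/
-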
As Luke stated, dovecot comes with a lot of "modules" (under /etc/dovecot/conf.d/ if you copied that folder) for all sorts of configurations that you can include, but I do as he does and just edit/create the whole dovecot.conf file; although I would like to check each of the separate configuration files dovecot provides, I think the options Luke suggests are more than good enough.
-
I’m working with an empty dovecot.conf file. Add the following lines for SSL and login configuration (also replace {yourcertdir} with the same certificate directory described in the Postfix section above, note that the < is required):
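A minimal sketch of that SSL/login block, based on Luke's emailwiz defaults (an assumption, not necessarily the exact lines the original entry had); the < prefix makes dovecot read the file contents:

ssl = required
ssl_cert = <{yourcertdir}/fullchain.pem
ssl_key = <{yourcertdir}/privkey.pem
ssl_min_protocol = TLSv1.2
ssl_dh = </etc/dovecot/dh.pem
auth_mechanisms = plain login
disable_plaintext_auth = no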
You may notice we specify a file we don’t have under /etc/dovecot: dh.pem. We need to create it with openssl (you should already have it installed if you’ve been following this entry and the one for nginx). Just run (might take a few minutes):
-
openssl dhparam -out /etc/dovecot/dh.pem 4096
-
-
After that, the next lines define what a "valid user" is (really just sets the database for users and passwords to be the local users with their passwords):
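Presumably via PAM against the system users, something like this (an assumption based on Luke's defaults):

auth_username_format = %n
passdb {
    driver = pam
}
userdb {
    driver = passwd
}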
Next, comes the mail directory structure (has to match the one described in the Postfix section). Here, the LAYOUT option is important so the boxes are .Sent instead of Sent. Add the next lines (plus any you like):
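A sketch of such a layout, matching the Mail/Inbox home from the Postfix section (the exact mailbox list is illustrative):

mail_location = maildir:~/Mail:INBOX=~/Mail/Inbox:LAYOUT=fs
namespace inbox {
    inbox = yes
    mailbox Drafts {
        special_use = \Drafts
    }
    mailbox Sent {
        special_use = \Sent
    }
    mailbox Trash {
        special_use = \Trash
    }
}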
Where you need to change {yourdomain} and {yoursubdomain} (doesn’t really need to be the sub-domain, could be anything that describes your key) accordingly, for me it’s luevano.xyz and mail, respectively. After that, we need to create some files inside the /etc/opendkim directory. First, create the file KeyTable with the content:
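The standard opendkim KeyTable format would make that the following single line (hedging, since the original content is elided):

{yoursubdomain}._domainkey.{yourdomain} {yourdomain}:{yoursubdomain}:/etc/opendkim/{yoursubdomain}.private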
Do the same for the SigningTable and TrustedHosts files; for the latter, make sure to include your server IP and something like {yoursubdomain}.{yourdomain}.
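Sketches for both files, following the common opendkim setup (assumptions, adapt to your domain):

SigningTable:

*@{yourdomain} {yoursubdomain}._domainkey.{yourdomain}

TrustedHosts:

127.0.0.1
::1
{yourserverip}
{yoursubdomain}.{yourdomain}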
-
Next, edit /etc/opendkim/opendkim.conf to reflect the changes (or rather, addition) of these files, as well as some other configuration. You can look up the example configuration file located at /usr/share/doc/opendkim/opendkim.conf.sample, but I’m creating a blank one with the contents:
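A minimal sketch of that blank opendkim.conf (the parameter list is an assumption; the socket has to agree with the milter configured in postfix above):

Domain {yourdomain}
Selector {yoursubdomain}
KeyTable /etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable
ExternalIgnoreList /etc/opendkim/TrustedHosts
InternalHosts /etc/opendkim/TrustedHosts
Socket inet:8891@localhost
UserID opendkim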
I'm using root:opendkim so opendkim doesn't complain about the {yoursubdomain}.private being insecure (you can change that by using the option RequireSafeKeys False in the opendkim.conf file, as stated here).
-
That’s it for the general configuration, but you could go more in depth and be more secure with some extra configuration.
Add the following TXT records on your domain registrar (these examples are for Epik):
-
-
DKIM entry: look up your {yoursubdomain}.txt file, it should look something like:
-
-
{yoursubdomain}._domainkey IN TXT ( "v=DKIM1; k=rsa; s=email; "
    "p=..."
    "..." ) ; ----- DKIM key mail for {yourdomain}
-
-
In the TXT record you will place {yoursubdomain}._domainkey as the “Host” and "v=DKIM1; k=rsa; s=email; " "p=..." "..." in the “TXT Value” (replace the dots with the actual value you see in your file).
-
-
-
DMARC entry: just _dmarc.{yourdomain} as the “Host” and "v=DMARC1; p=reject; rua=mailto:dmarc@{yourdomain}; fo=1" as the “TXT Value”.
-
-
-
SPF entry: just @ as the "Host" and "v=spf1 mx a:{yoursubdomain}.{yourdomain} -all" as the "TXT Value" (note that -all has no space in it).
-
-
-
And at this point you could test your mail for spoofing and more.
Then, you can edit local.cf (located in /etc/mail/spamassassin) to fit your needs (I only uncommented the rewrite_header Subject ... line). And then you can run the following command to update the patterns and compile them:
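Presumably the standard update/compile pair (an assumption):

sa-update && sa-compile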
And you could also execute sa-learn to train spamassassin‘s bayes filter, but this works for me. Then create the timer spamassassin-update.timer under the same directory, with the content:
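A sketch of that timer, assuming a matching spamassassin-update.service exists (that service unit is elided here) and daily updates are wanted:

[Unit]
Description=Update spamassassin rules daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target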
Next, you may want to edit the spamassassin service before starting and enabling it because, by default, it could spawn a lot of "children" eating a lot of resources when you really only need one. Append --max-children=1 to the ExecStart=... line in /usr/lib/systemd/system/spamassassin.service (ideally via a drop-in instead of editing the packaged unit):
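A drop-in sketch (created with systemctl edit spamassassin.service); the packaged default flags are left as a placeholder since they depend on the distro's unit:

[Service]
ExecStart=
# repeat the packaged ExecStart line here, appending --max-children=1
ExecStart=/usr/bin/vendor_perl/spamd <packaged default flags> --max-children=1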
We should have a working mail server by now. Before continuing, check your journal logs (journalctl -xe --unit={unit}, where {unit} could be spamassassin.service for example) to see if there was any error whatsoever and try to debug it; it should just be a typo somewhere, because all the settings and steps detailed here just worked; I literally just finished doing everything on a new server as of the writing of this text, it just werks on my machine.
-
Now, to actually use the mail service: first of all, you need a normal account (don’t use root) that belongs to the mail group (gpasswd -a user group to add a user user to group group) and that has a password.
-
Next, to actually login into a mail app/program, you will use the following settings, at least for Thunderbird (I tested in Windows' default mail app and you don't need a lot of settings):
-
-
Server: subdomain.domain (mail.luevano.xyz in my case)
-
SMTP port: 587
-
SMTPS port: 465 (I use this one)
-
IMAP port: 143
-
IMAPS port: 993 (again, I use this one)
-
Connection/security: SSL/TLS
-
Authentication method: Normal password
-
Username: just your user, not the whole email (david in my case)
-
Password: your user password (as in the password you use to login to the server with that user)
-
-
All that's left to do is test your mail server for spoofing, and to see if everything is set up correctly. Go to DKIM Test and follow the instructions (basically click next, and send an email with whatever content to the email that they provide). After you send the email, you should see the test results with everything passing.
diff --git a/live/blog/a/manga_server_with_komga.html b/live/blog/a/manga_server_with_komga.html
deleted file mode 100644
index f99d3c9..0000000
--- a/live/blog/a/manga_server_with_komga.html
+++ /dev/null
@@ -1,539 +0,0 @@
Set up a manga server with Komga and mangal
-
-
I've been wanting to set up a manga media server to hoard some mangas/comics and access them via Tachiyomi, but I didn't have enough space on my vultr VPS. Now that I have symmetric fiber optic at home and a spare PC to use as a server, I decided to go ahead and create one. As always, i use arch btw, so these instructions are specific to it; I'm not sure how much easier/harder it is on other distros, I'm just too comfortable with arch honestly.
-
I'm going to run it as an exposed service using a subdomain of my own, so the steps take that into account; if you want to run it locally (or on a LAN/VPN) then it is going to be easier/with fewer steps (you're on your own there). Also, as you might notice, I don't like to use D*ck*r images or anything (ew).
-
At the time of editing this entry (06-28-2023) Komga has already upgraded to v.1.0.0 and it introduces some breaking changes if you already had your instance set up. Read more here. The only change I did here was changing the port to the new default.
-
As always, all commands are run as root unless stated otherwise.
Similar to my early tutorial entries, if you want it as a subdomain:
-
-
An A (and/or AAAA) or a CNAME for komga (or whatever you want).
-
An SSL certificate, if you’re following the other entries (specially the website entry), add a komga.conf and run certbot --nginx (or similar) to extend/create the certificate. More details below: Reverse proxy and SSL certificate.
This is the first time I mention the AUR (and yay) in my entries, so I might as well just write a bit about it.
-
The AUR is the Arch Linux User Repository and it’s basically like an extension of the official one which is supported by the community, the only thing is that it requires a different package manager. The one I use (and I think everyone does, too) is yay, which as far as I know is like a wrapper of pacman.
To install and use yay we need a normal account with sudo access; all the commands related to yay are run as a normal user and it then asks for the sudo password. Installation is straightforward: clone the yay repo and install. The only dependencies are git and base-devel:
-
Install dependencies:
-
sudo pacman -S git base-devel
-
-
Clone yay and install it (I also like to delete the cloned git repo):
-
git clone https://aur.archlinux.org/yay.git # the AUR package repo, which contains the PKGBUILD needed by makepkg
cd yay
makepkg -si
cd ..
rm -r yay
-
yay is used basically the same as pacman, with the difference that it is run as a normal user (later requiring the sudo password) and that it asks for extra input when installing something, such as whether to build the package from source or to show package diffs.
-
To install a package (for example Komga in this blog entry), run:
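Presumably just:

yay -S komga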
As I mentioned in my past entry I had to fork mangal and related repositories to fix/change a few things. Currently the major fix I did in mangal is for the built-in MangaDex scraper, which had a really annoying bug in the chunking of the manga chapter listing.
-
So instead of installing with yay we'll build it from source. We need to have go installed:
-
pacman -S go
-
-
Then clone my fork of mangal and install it:
-
git clone https://github.com/luevano/mangal.git # not sure if you can use SSH to clone
cd mangal
make install # or just `make build` and then move the binary to somewhere in your $PATH
-
-
This will use go install, so it will install to a path specified by the go environment variables; for more, run go help install. It was installed to $HOME/.local/bin/go/mangal for me because of my env vars; just make sure this is included in your PATH.
-
Check it was correctly installed by running mangal version, which should print something like:
-
▇▇▇ mangal

Version    ...
Git Commit ...
Build Date ...
Built By   ...
Platform   ...
-
I’m going to do everything with a normal user (manga-dl) which I created just to download manga. So all of the commands will be run without sudo/root privileges.
-
Change some of the configuration options:
-
mangal config set -k downloader.path -v "/mnt/d/mangal" # downloads to current dir by default
mangal config set -k formats.use -v "cbz" # downloads as pdf by default
mangal config set -k installer.user -v "luevano" # points to my scrapers repository which contains a few extra scrapers and fixes, defaults to metafates' one; this is important if you're using my fork, don't use otherwise as it uses extra stuff I added
mangal config set -k logs.write -v true # I like to get logs for what happens
-
-
Note: for testing purposes (if you want to explore mangal first), only set downloader.path once you're ready to start populating the Komga library directory (at Komga: populate manga library).
-
For more configs and to read what they’re for:
-
mangal config info
-
-
Also install the custom Lua scrapers by running:
-
mangal sources install
-
-
And install whatever you want, it picks up the sources/scrapers from the configured repository (installer.<key> config), if you followed, it will show my scrapers.
Before continuing, I gotta say I went through some bullshit while trying to use the custom Lua scrapers that use the headless browser (actually just a wrapper of go-rod/rod, and honestly it is not really a "headless" browser; mangal's "documentation" is just wrong). For more on my rant check out my last entry.
-
There is no concrete documentation on the “headless” browser, only that it is automatically set up and ready to use… but it doesn’t install any library/dependency needed. I discovered the following libraries that were missing on my Arch minimal install:
I can't guarantee that those are all the packages needed, those are just the ones I happened to discover (I had to fork the lua libs and add some logging because the error message was too fucking generic).
-
These dependencies are probably met by installing either chromedriver or google-chrome from the AUR (for what I could see on the package dependencies).
Download manga using the TUI by selecting the source/scraper, search the manga/comic you want and then select each chapter to download (use tab to select all). This is what I use when downloading manga that already finished publishing, or when I'm just searching and testing out how it downloads the manga (directory name, and manga information).
-
Note that some scrapers will contain duplicated chapters, as they have multiple uploaded chapters from the community, usually for different scanlation groups. This happens a lot with MangaDex.
The inline mode is a single terminal command meant to be used to automate stuff or for more advanced options. You can peek a bit into the "documentation", which honestly is ass because it doesn't explain much. The minimal command for inline according to the mangal help is:
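Presumably just the query flag (an assumption derived from the help text):

mangal inline --query "manga name"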
But this will not produce anything, because it also needs --source (or set the default using the config key downloader.default_sources) and either --json, which basically just does the search and returns the result in json format, or --download to actually download whatever is found; I recommend doing --json first to check that the correct manga will be downloaded, then doing --download.
-
Something not mentioned anywhere is the --manga flag options (found it at the source code), it has 3 available options:
-
-
first: first manga entry found for the search.
-
last: last manga entry found for the search.
-
exact: exact manga title match. This is the one I use.
-
-
Similar to --chapters, there are a few options not explained (that I found at the source code, too). I usually just use all but other options:
-
-
all: all chapters found in the chapter list.
-
first: first chapter found in the chapter list.
-
last: last chapter found in the chapter list
-
[from]-[to]: selector for the chapters found in the chapter list, index starts at 0.
-
If the selectors (from or to) exceed the amount of chapters in the chapter list, it just adjusts to the maximum available.
-
I had to fix this at the source code, because if you set to to the last chapter, it did to + 1 and it failed due to index out of range.
-
-
-
@[sub]@: not sure how this works exactly, my understanding is that it’s for “named” chapters.
Search first and make sure my command will pull the manga I want:
-
-
mangal inline --source "Mangapill" --manga "exact" --query "Kimetsu no Yaiba" --json | jq # I use jq to pretty format the output
-
-
-
I make sure the json output contains the correct manga information: name, url, etc..
-
-
-
You can also include the flag --include-anilist-manga to include anilist information (if any) so you can check that the correct anilist id is attached. If the correct one is not attached (and it exists) then you can bind the --query (search term) to a specific anilist id by running:
-
-
mangal inline anilist set --name "Kimetsu no Yaiba" --id 101922
-
-
-
If I’m okay with the outputs, then I change --json for --download to actually download:
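Which, following the search command from above, would presumably be:

mangal inline --source "Mangapill" --manga "exact" --query "Kimetsu no Yaiba" --download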
Check if the manga is downloaded correctly. I do this by going to my download directory and checking the directory name (I'm picky with this stuff), that all chapters were downloaded, that it includes a correct series.json file and that it contains a cover.<img-ext>; this usually means it correctly pulled information from anilist and that it will contain metadata Komga will be able to use.
The straightforward approach for automation is just to bundle a bunch of mangal inline commands in a shell script and schedule its execution either via cron or systemd/Timers. But, as always, I overcomplicated/overengineered my approach, which is the following:
-
-
Group manga names per source.
-
Configure anything that should always be set before executing mangal, this includes anilist bindings.
-
Have a way to track the changes/updates on each run.
-
Use that tracker to know where to start downloading chapters from.
-
This is optional, as you can just do --chapters "all" and it will work but I do it mostly to keep the logs/output cleaner/shorter.
Function that handles the download per manga in the list:
-
mangal_src_dl () {
    source_name=$1
    manga_list=$(echo "$2" | tr '|' '\n')

    while IFS= read -r line; do
        # By default download all chapters
        chapters="all"
        last_chapter_n=$(grep -e "$line" "$TRACKER_FILE" | cut -d'|' -f2 | grep -v -e '^$' | tail -n 1)
        if [ -n "${last_chapter_n}" ]; then
            chapters="$last_chapter_n-9999"
            echo "Downloading [${last_chapter_n}-] chapters for $line from $source_name..."
        else
            echo "Downloading all chapters for $line from $source_name..."
        fi
        dl_output=$(mangal inline -S "$source_name" -q "$line" -m "exact" -F "$DOWNLOAD_FORMAT" -c "$chapters" -d)

        if [ $? -ne 0 ]; then
            echo "Failed to download chapters for $line."
            continue
        fi

        line_count=$(echo "$dl_output" | grep -v -e '^$' | wc -l)
        if [ $line_count -gt 0 ]; then
            echo "Downloaded $line_count chapters for $line:"
            echo "$dl_output"
            new_last_chapter_n=$(echo "$dl_output" | tail -n 1 | cut -d'[' -f2 | cut -d']' -f1)
            # manga_name|last_chapter_number|downloaded_chapters_on_this_update|manga_source
            echo "$line|$new_last_chapter_n|$line_count|$source_name" >> "$TRACKER_FILE"
        else
            echo "No new chapters for $line."
        fi
    done <<< "$manga_list"
}
-
-
Where $TRACKER_FILE is just a variable holding a path to some file where you can store the tracking and $DOWNLOAD_FORMAT the format for the mangas, for me it’s cbz. Then the usage would be something like mangal_src_dl "Mangapill" "$mangapill", meaning that it is a function call per source.
-
A simpler function without “tracking” would be:
-
mangal_src_dl () {
    source_name=$1
    manga_list=$(echo "$2" | tr '|' '\n')

    while IFS= read -r line; do
        echo "Downloading all chapters for $line from $source_name..."
        mangal inline -S "$source_name" -q "$line" -m "exact" -F "$DOWNLOAD_FORMAT" -c "all" -d
        if [ $? -ne 0 ]; then
            echo "Failed to download chapters for $line."
            continue
        fi
        echo "Finished downloading chapters for $line."
    done <<< "$manga_list"
}
-
-
The tracker file would have a format like follows:
-
# Updated: 06/10/23 10:53:15 AM CST
Berserk|0392|392|Mangapill
Dandadan|0110|110|Mangapill
...
-
-
And note that if you already had manga downloaded and you run the script for the first time, then it will show as if it downloaded everything from the first chapter, but that’s just how mangal works, it will actually just discover downloaded chapters and only download anything missing.
-
Any configuration the downloader/updater might need needs to be done before the mangal_src_dl calls. I like to configure mangal for download path, format, etc.. I found that it is needed to clear the mangal and rod browser cache (headless browser used in some custom sources) from personal experience and from others: mangal#170 and kaizoku#89.
-
Also you should set any anilist binding necessary for the downloading (as the cache was cleared). An example of an anilist binding I had to do is for Mushoku Tensei, as it has both a light novel and manga version, which for me it’s the following binding:
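Following the binding command shown earlier; the exact search name and ID below are illustrative, look up the manga (not the light novel) ID on anilist yourself:

mangal inline anilist set --name "Mushoku Tensei - Isekai Ittara Honki Dasu" --id <anilist-manga-id>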
Finally, it's just a matter of using your preferred way of scheduling; I'll use systemd/Timers but anything is fine. You could make the downloader script more sophisticated and only run it on the day of the week each manga (usually) gets released, but that's too much work; I'll just run it once daily.
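A sketch of that scheduling; the unit names and the script path (manga-dl.service/manga-dl.timer and /home/manga-dl/update_mangas.sh) are hypothetical:

/etc/systemd/system/manga-dl.service:

[Unit]
Description=Download manga updates

[Service]
Type=oneshot
User=manga-dl
ExecStart=/home/manga-dl/update_mangas.sh

/etc/systemd/system/manga-dl.timer:

[Unit]
Description=Run the manga downloader daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Then enable it with systemctl enable --now manga-dl.timer.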
-
A feature I want to add (and probably will) is sending notifications (probably through email) with a summary of manga downloaded or failed to download, so I'm on top of the updates. For now this is good enough and it's been working so far.
This komga package creates a komga (service) user and group which is tied to the also included komga.service.
-
Configure it by editing /etc/komga.conf:
-
SERVER_PORT=25600
SERVER_SERVLET_CONTEXT_PATH=/ # this depends a lot on how it's going to be served (domain, subdomain, ip, etc)

KOMGA_LIBRARIES_SCAN_CRON="0 0 * * * ?"
KOMGA_LIBRARIES_SCAN_STARTUP=false
KOMGA_LIBRARIES_SCAN_DIRECTORY_EXCLUSIONS='#recycle,@eaDir,@Recycle'
KOMGA_FILESYSTEM_SCANNER_FORCE_DIRECTORY_MODIFIED_TIME=false
KOMGA_REMEMBERME_KEY=USE-WHATEVER-YOU-WANT-HERE
KOMGA_REMEMBERME_VALIDITY=2419200

KOMGA_DATABASE_BACKUP_ENABLED=true
KOMGA_DATABASE_BACKUP_STARTUP=true
KOMGA_DATABASE_BACKUP_SCHEDULE="0 0 */8 * * ?"
-
-
My changes (shown above):
-
-
cron schedules.
-
It’s not actually cron but rather a cron-like syntax used by Spring as stated in the Komga config.
If you’re going to run it locally (or LAN/VPN) you can start the komga.service and access it via IP at http://<your-server-ip>:<port>(/base_url) as stated at Komga: Accessing the web interface, then you can continue with the mangal section, else continue with the next steps for the reverse proxy and certificate.
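Either way, the service is presumably started the usual way:

systemctl enable --now komga.service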
Create the reverse proxy configuration (this is for nginx). In my case I’ll use a subdomain, so this is a new config called komga.conf at the usual sites-available/enabled path:
-
server {
    listen 80;
    server_name komga.yourdomain.com; # change accordingly to your wanted subdomain and domain name

    location / {
        proxy_pass http://localhost:25600; # change port if needed
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_read_timeout 600s;
        proxy_send_timeout 600s;
    }
}
-
-
If it’s going to be used as a subdir on another domain then just change the location with /subdir instead of /; be careful with the proxy_pass directive, it has to match what you configured at /etc/komga.conf for the SERVER_SERVLET_CONTEXT_PATH regardless of the /subdir you selected at location.
If using a subdir then the same certificate for the subdomain/domain should work fine and no extra stuff is needed, else if following along me then we can create/extend the certificate by running:
-
certbot --nginx
-
-
That will automatically detect the new subdomain config and create/extend your existing certificate(s). In my case I manage each certificate’s subdomain:
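For reference, explicitly listing domains would look something like this (the domains are placeholders, list every (sub)domain the certificate should cover):

certbot --nginx -d yourdomain.com -d komga.yourdomain.com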
And access the web interface at https://komga.domainname.com which should show the login page for Komga. The first time it will ask to create an account as shown in Komga: Create user account, this will be an admin account. Fill in the email and password (can be changed later). The email doesn’t have to be an actual email, for now it’s just for management purposes.
-
Next thing would be to add any extra account (for read-only/download manga permissions), add/import libraries, etc.. For now I’ll leave it here until we start downloading manga on the next steps.
Creating a library is as simple as creating a directory somewhere and point to it in Komga. The following examples are for my use case, change accordingly. I’ll be using /mnt/d/mangal for my library (as stated in the mangal: configuration section):
-
mkdir /mnt/d/mangal
-
-
Where I chose the name mangal as it's the name of the downloader/scraper; it could be anything, this is just how I like to organize stuff.
-
For the most part, the permissions don’t matter much (as long as it’s readable by the komga user) unless you want to delete some manga, then komga user also needs write permissions.
-
Then just create the library in Komga web interface (the + sign next to Libraries), choose a name “Mangal” and point to the root folder /mnt/d/mangal, then just click Next, Next and Add for the defaults (that’s how I’ve been using it so far). This is well explained at Komga: Libraries.
-
The real important part (for me) is the permissions of the /mnt/d/mangal directory, as I want to have write access for komga so I can manage from the web interface itself. It looks like it’s just a matter of giving ownership to the komga user either for owner or for group (or to all for that matter), but since I’m going to use a separate user to download manga then I need to choose carefully.
The desired behaviour is: set komga as group ownership, set write access to group and whenever a new directory/file is created, inherit these permission settings. I found out via this stack exchange answer how to do it. So, for me:
-
chown manga-dl:komga /mnt/d/mangal # required for group ownership for komga
chmod g+s /mnt/d/mangal # required for group permission inheritance
setfacl -d -m g::rwx /mnt/d/mangal # default permissions for group
setfacl -d -m o::rx /mnt/d/mangal # default permissions for other (as normal, I think this command can be excluded)
-
-
Where manga-dl is the user I created to download manga with. Optionally add the -R flag to those 4 commands in case the directory already has subdirectories/files (this might mess up existing file permissions, but it's not an issue as far as I know).
-
Checking that the permissions are set correctly (getfacl /mnt/d/mangal):
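The output should look roughly like this, assuming the exact commands above were used:

# file: mnt/d/mangal
# owner: manga-dl
# group: komga
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x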
You can now start downloading manga using mangal either manually or by running the cron/systemd/Timers and it will be detected by Komga automatically when it scans the library (once every hour according to my config). You can manually scan the library, though, by clicking on the 3 dots to the right of the library name (in Komga) and click on “Scan library files”.
-
Then you can check that the metadata is correct (once the manga is fully indexed and metadata finished building), such as title, summary, chapter count, language, tags, genre, etc., which honestly never works fine, as mangal creates the series.json with the comicId field with an upper case I while Komga expects it to be a lower case i (comicid), so it falls back to using the info from the first chapter. I'll probably fix this on mangal's side and see how it goes.
-
So, what I do is manually edit the metadata for the manga, by changing whatever it’s wrong or add what’s missing (I like adding anilist and MyAnimeList links) and then leave it as is. This is up to you.
Just for the record, here is a list of downloaders/scrapers I considered before starting to use mangal:
-
-
kaizoku: NodeJS web server that uses mangal for its "backend" and honestly, since I liked mangal so much, I should use it; the only reason I don't is because I'm a bitch and I don't want to use a D*ck*r image and NodeJS (ew) (in general it's pretty bloated in my opinion). If I get tired of my solution with pure mangal I might as well just migrate to it, as it's a more automatic solution.
-
manga-py: Python CLI application that's a really good option as far as I've explored it; I'm just not using it yet as mangal has been really smooth and has everything I need, but I will definitely explore it in the future if I need to. The cool thing out of the box is the amount of sources it can scrape from (something lacking in mangal).
-
mylar3: Python web server that should be the easier way to download manga with once correctly set up, but I guess I’m too dumb and don’t know how to configure it. Looks like you need to have access to specific private torrent trackers or whatever the other ways to download are, I just couldn’t figure out how to set it up and for public torrent stuff everything will be all over the place, so this was no option for me at the end.
-
-
Others:
-
-
HakuNeku: It looks pretty easy to use and feature rich, the only thing is that it's not designed for headless servers, just a normal desktop app, so this is also not an option for me. You could use it on your computer and rsync to your server or use some other means to upload to your server (a no-no for me).
-
FMD: No fucking idea on how to use it and it’s not been updated since 2019, just listing it here as an option if it interests you.
diff --git a/live/blog/a/new_blogging_system.html b/live/blog/a/new_blogging_system.html
deleted file mode 100644
index 61b81d3..0000000
--- a/live/blog/a/new_blogging_system.html
+++ /dev/null
@@ -1,156 +0,0 @@
I'm using a new blogging system
-
-
So, I was tired of working with ssg (and then sbg, which was a modified version of ssg that I "wrote"), for one general reason: not being able to extend it as I would like; and not just dumb little stuff, I wanted to be able to have more control, to add tags (which another tool that I found does: blogit), and even more in the future.
-
The solution? Write a new program "from scratch" in pYtHoN. Yes it is bloated, yes it is in its early stages, but it works just as I want it to work, and I'm pretty happy so far with the results, and I have even more ideas in mind to "optimize" and generally clean my wOrKfLoW to post new blog entries. I even thought of using it for posting into a "feed"-like gallery for drawings or pictures in general.
-
I called it pyssg, because it sounds nice and it wasn't taken on PyPI. It is just a terminal program that reads either a configuration file or the options passed as flags when calling the program.
-
It still uses Markdown files because I find them very easy to work with. And instead of just having a "header" and a "footer" applied to each parsed entry, you will have templates (generated with the program) for each piece that I thought made sense (idea taken from blogit): the common header and footer, the common header and footer for each entry, and the header, footer and list elements for articles and tags. When parsing the Markdown file these templates are applied and stitched together to make a single HTML file. It also generates an RSS feed and the sitemap.xml file, which is nice.
-
It might sound convoluted, but it works pretty well, with of course room to improve; I'm open to suggestions, issue reporting or direct contributions here. For now, it is only tested on Linux (and I don't plan on making it work on Windows, but feel free to do a PR for compatibility).
Update: Since writing this entry, pyssg has evolved quite a bit, so not everything described here is still true. For the latest updates check the newest entries or the git repository itself.
diff --git a/live/blog/a/password_manager_authenticator_setup.html b/live/blog/a/password_manager_authenticator_setup.html
deleted file mode 100644
index 8f17596..0000000
--- a/live/blog/a/password_manager_authenticator_setup.html
+++ /dev/null
@@ -1,160 +0,0 @@
My setup for a password manager and MFA authenticator
-
-
Disclaimer: I won’t go into many technical details here of how to install/configure/use the software, this is just supposed to be a short description on my setup.
-
It's been a while since I started using a password manager at all, and I'm happy that I started with KeePassXC (an open source, multiplatform password manager that is completely offline) as a direct recommendation from EL ELE EME; before this I was using the same password for everything (like a lot of people), which is a well-known privacy issue as noted in detail by Leo (I don't personally recommend LastPass as Leo does). Note that you will still need a master password to lock/unlock your password database (you can additionally use a hardware key and a key file).
-
Anyways, setting up keepass is pretty simple, as there is a client for almost any device; note that keepass is basically just the format and the base for all of the clients, as is common with pretty much any open source software. In my case I'm using KeePassXC on my computer and KeePassDX on my phone (Android). The only concern is keeping everything in sync, because keepass doesn't have any automatic method of synchronizing between devices for security reasons (as far as I know), meaning that you have to manage that yourself.
-
Usually you can use something like G**gl* drive, dropbox, mega, nextcloud, or any other cloud solution that you like to sync your keepass database between devices; I personally prefer to use Syncthing as it's open source, it's really easy to set up and has worked wonders for me since I started using it; it also keeps versions of your files that can serve as backups in any scenario where the database gets corrupted or something.
-
Finally, when I went through the issue with the micro SD and the adoptable storage bullshit (you can find the rant here, in Spanish) I also had to migrate from G**gl* authenticator (gauth) to something else, for the simple reason that gauth doesn't even let you do backups, nor is it synced with your account… nothing, it is just standalone and if you ever lose your phone you're fucked. So I decided to go with Aegis authenticator: it is open source, you have control over all your secret keys, you can do backups directly to the filesystem, you can secure your database with an extra password, etc., etc.. In general aegis is the superior MFA authenticator (at least compared with gauth) and everything that's compatible with gauth is compatible with aegis, as the format is a standard (as a matter of fact, keepass also has this MFA feature, called TOTP, and it is also compatible, but I prefer to keep things separate). I also use syncthing to keep a backup of my aegis database.
-
TL;DR:
-
-
Syncthing to sync files between devices (for the password databases).
-
KeePassXC for the password manager in my computer.
-
KeePassDX for the password manager in my phone.
-
Aegis authenticator for MFA.
diff --git a/live/blog/a/pastebin_alt_with_privatebin.html b/live/blog/a/pastebin_alt_with_privatebin.html
deleted file mode 100644
index ef62906..0000000
--- a/live/blog/a/pastebin_alt_with_privatebin.html
+++ /dev/null
@@ -1,401 +0,0 @@
Set up a pastebin alternative with PrivateBin and YOURLS
-
-
I learned about PrivateBin a few weeks back and ever since I’ve been looking into installing it, along with a URL shortener (a service I wanted to self host since forever). It took me a while as I ran into some problems while experimenting and documenting all the necessary bits in here.
-
My setup is exposed to the public, and as always is heavily based on previous entries as described in Prerequisites. Descriptions on setting up MariaDB (preferred MySQL replacement for Arch) and PHP are written in this entry as this is the first time I’ve needed them.
-
Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.
To use mariadb, simply run the command and it will try to login as the linux user running it. The general login command is:
-
mariadb -u <username> -p <database_name>
-
-
The database_name is optional. It will prompt for a password.
-
Using mariadb as root, create users with their respective database if needed with the following queries:
-
MariaDB> CREATE USER '<username>'@'localhost' IDENTIFIED BY '<password>';
MariaDB> CREATE DATABASE <database_name>;
MariaDB> GRANT ALL PRIVILEGES ON <database_name>.* TO '<username>'@'localhost';
MariaDB> quit
-
-
The database_name will depend on how YOURLS and PrivateBin are configured, that is if the services use a separate database and/or table prefixes are used.
PHP is a general-purpose scripting language usually used for web development, which was considered ass for a long time, but that seems to be a misconception from the old days.
The default configuration file is self explanatory, it is located at /etc/webapps/yourls/config.php. Make sure to correctly set the user/database YOURLS will use and either create a cookie or get one from URL provided.
-
It is important to change the $yourls_user_passwords variable; YOURLS will hash the passwords on login so they are not stored in plaintext. Password hashing can be disabled with:
-
define( 'YOURLS_NO_HASH_PASSWORD', true );
-
-
I also changed the “shortening method” to 62 to include more characters:
-
define( 'YOURLS_URL_CONVERT', 62 );
-
-
The $yourls_reserved_URL variable will need more blacklisted words depending on the use-case. Make sure the YOURLS_PRIVATE variable is set to true (default) if the service will be exposed to the public.
The admin area is located at https://short.example.com/admin/, login with any of the configured users set with $yourls_user_passwords in the config. Activate plugins by going to the "Manage Plugins" page (located at the top left) and clicking the respective "Activate" button in the "Action" column, as shown below:
-
-
I personally activated the “Random ShortURLs” and “Allow Hyphens in Short URLs”. Once the “Random ShortURLs” plugin is activated it can be configured by going to the “Random ShortURLs Settings” page (located at the top left, right below “Manage Plugins”), only config available is “Random Keyword Length”.
-
The main admin area can be used to manually shorten any link provided, by using the automatic shortening or by providing a custom short URL.
-
Finally, the "Tools" page (located at the top left) contains the signature token, used for YOURLS: Passwordless API, as well as useful bookmarklets for URL shortening while browsing.
The most important changes needed are basepath according to the privatebin URL and the [model] and [model_options] to use MySQL instead of plain filesystem files:
-
[model]
; example of DB configuration for MySQL
class = Database
[model_options]
dsn = "mysql:host=localhost;dbname=privatebin;charset=UTF8"
tbl = "privatebin_" ; table prefix
usr = "privatebin"
pwd = "<password>"
opt[12] = true ; PDO::ATTR_PERSISTENT
-
-
Any other [model] or [model_options] needs to be commented out (for example, the default filesystem setting).
I recommend creating a separate user for privatebin in yourls by modifying the $yourls_user_passwords variable in the yourls config file. Then login with this user and get the signature from the "Tools" section in the admin page; for more: YOURLS: Passwordless API.
-
For a "private" yourls installation (that needs username/password), set urlshortener:
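A sketch of that setting in PrivateBin's conf.php, assuming the signature-based YOURLS API (the subdomain and signature are placeholders):

[main]
urlshortener = "https://short.example.com/yourls-api.php?signature=<your-signature>&action=shorturl&format=json&url="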
I've been wanting to change the way pyssg reads config files and generates HTML files so that it is more flexible and I don't need to have 2 separate build commands and configs (for blog and art), and also so it can handle other types of "sites"; because pyssg was built with blogging in mind, it was a bit limited in how it could be used. So I had to kind of rewrite pyssg, and with the latest version I can now generate the whole site and use the same templates for everything, quite neat for my use case.
-
Anyways, so I bought a new domain for all pyssg related stuff, mostly because I wanted somewhere to test live builds while developing, it is of course pyssg.xyz; as of now it is the same template, CSS and scripts that I use here, probably will change in the future. I’ll be testing new features and anything pyssg related stuff.
-
I should start pointing all links to pyssg to the actual site instead of the github repository (or my git repository), but I haven’t decided how to handle everything.
That's right, I had been neglecting this thing a bit, the main reason being that I was busy with professional life stuff, ayay. But now that I'm a bit less busy and less stressed I'm going to keep using the blog and see what else I do.
-
I have some pending entries I want to write in the "tutorial" or "how-to" style, but I've been debating it, because Luke already started doing it for real at landchad.net, which I highly recommend since I started doing this because of him (and EL ELE EME); although honestly it's very specific to how he does things and there may be differences, but I'll see about it these days. The next one I want to do is about the VPN, because I haven't set it up since I restarted The Website and The Server, so I'll set the VPN up again and write an entry on it while I'm at it.
-
I also left a drawing pending, which honestly I dropped for 2 reasons: it's really fucking hard (because I also want to color it) and I was busy; of which only the "it's really fucking hard" part remains, but I haven't had the guts to pick it up again. The sad part is that the hype window already passed and I don't have much motivation to finish it anymore, other than the fact that when I finish it I'll start using Clip Studio Paint instead of Krita, because I bought a license now that it was 50% off.
-
Something good is that I've been feeling very good about myself lately, even if I barely talk about it. There is a specific reason, but it's a somewhat silly one. I hope it stays that way.
-
Oh, and I also wanted to set up a comments section, but as always, all the options are pretty bloated, so I'll just whip one up myself, probably in Python for the back end, MySQL for the database and JavaScript for the connection here on the front end, something chill. Nah, turns out I don't need this after all, why bother.
diff --git a/live/blog/a/torrenting_with_qbittorrent.html b/live/blog/a/torrenting_with_qbittorrent.html
deleted file mode 100644
index 8cd9dae..0000000
--- a/live/blog/a/torrenting_with_qbittorrent.html
+++ /dev/null
@@ -1,411 +0,0 @@
Set up qBitTorrent with Jackett for use with Starr apps
-
-
Riding on my excitement of having a good internet connection and having set up my home server, now it's time to self host a media server for movies, series and anime. I'll set up qBitTorrent as the downloader, Jackett for the trackers, the Starr apps for the automatic downloading and Jellyfin as the media server manager/media viewer. This was going to be a single entry but it ended up being a really long one, so I'm splitting it, this being the first part.
-
I'll be exposing my stuff on a subdomain, only so I can access it while away from home and to get SSL certificates (not required); it shouldn't really be necessary, and you can use a VPN instead (how to set up). For your reference, whenever I say "Starr apps" (*arr apps) I mean the family of apps such as Sonarr, Radarr, Bazarr, Readarr, Lidarr, etc..
-
Most of my config is based on the TRaSH-Guides (mentioned as "TRaSH" going forward). Especially get familiar with the TRaSH: Native folder structure and with the TRaSH: Hardlinks and instant moves. I will also use the default configurations based on the respective documentation for each Starr app and service, except when stated otherwise.
-
Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.
-
Kindly note that I do not condone the use of torrenting for illegal activities. I take no responsibility for what you do when setting up anything shown here. It is for you to check your local laws before using automated downloaders such as Sonarr and Radarr.
The specific programs are mostly recommendations, if you’re familiar with something else or want to change things around, feel free to do so but everything will be written with them in mind.
-
If you want to expose to a (sub)domain, then similar to my early tutorial entries (specially the website for the reverse proxy plus certificates):
An A (and/or AAAA) or a CNAME for isos (or whatever you want to call it).
-
For automation software (qBitTorrent, Jackett, Starr apps, etc.). One subdomain per service can be used instead.
-
-
-
-
Note: I’m using the explicit 127.0.0.1 ip instead of localhost in the reverse proxies/services config as localhost resolves to ipv6 sometimes which is not configured on my server correctly. If you have it configured you can use localhost without any issue.
The desired behaviour is: set servarr as group ownership, set write access to group and whenever a new directory/file is created, inherit these permission settings. servarr is going to be a service user and I’ll use the root of a mounted drive at /mnt/a.
-
-
Create a service user called servarr (it could just be a group, too):
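One way to create such a service user (the exact flags are an assumption):

useradd -r -s /usr/bin/nologin servarr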
Jackett is a “proxy server” (or “middle-ware”) that translates queries from apps (such as the Starr apps in this case) into tracker-specific http queries. Note that there is an alternative called Prowlarr that is better integrated with most if not all Starr apps, requiring less maintenance; I’ll still be sticking with Jackett, though.
-
Install from the AUR with yay:
-
yay -S jackett
-
-
I’ll be using the default 9117 port, but change accordingly if you decide on another one.
I’m going to have most of the services under the same subdomain, with different subdirectories. Create the config file isos.conf at the usual sites-available/enabled path for nginx:
-
server {
    listen 80;
    server_name isos.yourdomain.com;

    location /jack { # you can change this to jackett or anything you'd like, but it has to match the jackett config on the next steps
        proxy_pass http://127.0.0.1:9117; # change the port according to what you want

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_redirect off;
    }
}
-
-
This is the basic reverse proxy config as shown in Jackett: Running Jackett behind a reverse proxy. The rest of the services will be added under different location blocks in their respective steps.
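As in previous entries, create/extend the SSL certificate; presumably the same command as before:

certbot --nginx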
That will automatically detect the new subdomain config and create/extend your existing certificate(s). Restart the nginx service for changes to take effect:
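Presumably restart nginx and then start/enable jackett:

systemctl restart nginx.service
systemctl enable --now jackett.service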
It will autocreate the default configuration under /var/lib/jackett/ServerConfig.json, which you need to edit at least to change the BasePathOverride to match what you used in the nginx config:
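The relevant part of that file would look something like this (all other keys elided; only Port and BasePathOverride shown):

{
    ...
    "Port": 9117,
    "BasePathOverride": "/jack",
    ...
}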
Also modify the Port if you changed it. Restart the jackett service:
-
systemctl restart jackett.service
-
-
It should now be available at https://isos.yourdomain.com/jack. Add an admin password right away by scrolling down to the first config setting; don't forget to click on "Set Password". Change any other config you want from the Web UI, too (you'll need to click on the blue "Apply server settings" button).
-
Note that you need to set the “Base URL override” to http://127.0.0.1:9117 (or whatever port you used) so that the “Copy Torznab Feed” button works for each indexer.
For Jackett, an indexer is just a configured tracker for some of the commonly known torrent sites. Jackett comes with a lot of pre-configured public and private indexers which usually have multiple URLs (mirrors) per indexer, useful when the main torrent site is down. Some indexers come with extra features/configuration depending on what the site specializes on.
-
To add an indexer click on the “+ Add Indexer” at the top of the Web UI and look for indexers you want, then click on the “+” icon on the far-most right for each indexer or select the ones you want (clicking on the checkbox on the far-most left of the indexer) and scroll all the way to the bottom to click on “Add Selected”. They then will show as a list with some available actions such as “Copy RSS Feed”, “Copy Torznab Feed”, “Copy Potato Feed”, a button to search, configure, delete and test the indexer, as shown below:
-
-
You can manually test the indexers by doing a basic search to see if they return anything, either by searching on each individual indexer by clicking on the magnifying glass icon on the right of the indexer or clicking on “Manual Search” button which is next to the “+ Add Indexer” button at the top right.
-
Explore each indexer’s configuration in case there is stuff you might want to change.
FlareSolverr is used to bypass certain protection that some torrent sites have. This is not 100% necessary and only needed for some trackers sometimes, it also doesn’t work 100%.
-
You could install from the AUR with yay:
-
yay -S flaresolverr-bin
-
-
At the time of writing, the flaresolverr package didn't work for me because of python-selenium. flaresolverr-bin was updated around the time I was writing this, so that is what I'm using and it's working fine so far; it contains almost everything needed (it has self-contained libraries) except for xvfb.
-
Install dependencies via pacman:
-
pacman -S xorg-server-xvfb
-
-
You can now start/enable the flaresolverr.service:
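Presumably:

systemctl enable --now flaresolverr.service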
Verify that the service started correctly by checking the logs:
-
journalctl -fxeu flaresolverr
-
-
It should show “Test successful” and “Serving on http://0.0.0.0:8191” (which is the default). Jackett now needs to be configured by adding http://127.0.0.1:8191 almost at the end in the “FlareSolverr API URL” field, then click on the blue “Apply server settings” button at the beginning of the config section. This doesn’t need to be exposed or anything, it’s just an internal API that Jackett (or anything you want) will use.
qBitTorrent is a fast, stable and light BitTorrent client that comes with many features and in my opinion it’s a really user friendly client and my personal choice for years now. But you can choose whatever client you want, there are more lightweight alternatives such as Transmission.
-
Install the qbittorrent-nox package (“nox” as in “no X server”):
-
pacman -S qbittorrent-nox
-
-
By default the package doesn’t create any (service) user, but it is recommended to have one so you can run the service under it. Create the user, I’ll call it qbittorrent and leave it with the default homedir (/home):
-
useradd -r -m qbittorrent
-
-
Add the qbittorrent user to the servarr group:
-
gpasswd -a qbittorrent servarr
-
-
Decide a port number you’re going to run the service on for the next steps, the default is 8080 but I’ll use 30000; it doesn’t matter much, as long as it matches for all the config. This is the qbittorrent service port, it is used to connect to the instance itself through the Web UI or via API, you also need to open a port for listening to peer connections. Choose any port you want, for example 50000 and open it with your firewall, ufw in my case:
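A sketch of the firewall rule plus starting the service; the templated unit name comes from the Arch package (an assumption), with the instance part being the user to run as:

ufw allow 50000 comment 'qBitTorrent listening port'
systemctl enable --now qbittorrent-nox@qbittorrent.service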
This will start qbittorrent using default config. You need to change the port (in my case to 30000) and set qbittorrent to restart on exit (the Web UI has a close button). I guess this can be done before enabling/starting the service, but either way let’s create a “drop-in” file by “editing” the service:
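Presumably:

systemctl edit qbittorrent-nox@qbittorrent.service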
Which will bring up a file editing mode containing the service unit and a space where you can add/override anything, add:
-
[Service]
# use whatever port number you want; note that systemd doesn't support inline comments
Environment="QBT_WEBUI_PORT=30000"
Restart=on-success
RestartSec=5s
-
-
When exiting from the file (if you wrote anything) it will create the override config. Restart the service for changes to take effect (you might be asked to reload the systemd daemon):
You can now head to https://isos.yourdomain.com/qbt/ and login with user admin and password adminadmin (by default). Change the default password right away by going to Tools -> Options -> Web UI -> Authentication. The Web UI is basically the same as the normal desktop UI, so if you've used it it will feel familiar. From here you can treat it as a normal torrent client and even start using it for stuff other than what's specified here.
It should be usable already but you can go further and fine tune it, specially to some kind of “convention” as shown in TRaSH: qBitTorrent basic setup and subsequent qbittorrent guides.
-
I use all the suggested settings by TRaSH, where the only “changes” are for personal paths, ports, and in general connection settings that depend on my setup. The only super important setting I noticed that can affect our setup (meaning what is described in this entry) is the Web UI -> Authentication for the “Bypass authentication for clients on localhost”. This will be an issue because the reverse proxy is accessing qbittorrent via localhost, so this will make the service open to the world, experiment at your own risk.
-
Finally, add categories by following TRaSH: qBitTorrent how to add categories, basically right clicking on Categories -> All (x) (located at the left of the Web UI) and then on “Add category”; I use the same “Category” and “Save Path” (tv and tv, for example), where the “Save Path” will be a subdirectory of the configured global directory for torrents (TRaSH: qBitTorent paths and categories breakdown). I added 3: tv, movies and anime.
Often some of the trackers that come with torrents are dead or just don’t work. You have the option to add extra trackers to torrents either by:
-
-
Automatically add a predefined list on new torrents: configure at Tools -> Options -> BitTorrent, enable the last option “Automatically add these trackers to new downloads” then add the list of trackers. This should only be done on public torrents as private ones might ban you or something.
-
Manually add a list of trackers to individual torrents: configure by selecting a torrent, clicking on Trackers on the bottom of the Web UI, right clicking on an empty space and selecting “Add trackers…” then add the list of trackers.
-
-
On both options, the list of trackers need to have at least one new line in between each new tracker. You can find trackers from the following sources:
Both sources maintain an updated list of trackers. You also might need to enable an advanced option so all the new trackers are contacted (Only first tracker contacted): configure at Tools -> Options -> Advanced -> libtorrent Section and enable both “Always announce to all tiers” and “Always announce to all trackers in a tier”.
diff --git a/live/blog/a/updated_pyssg_pymdvar_and_website.html b/live/blog/a/updated_pyssg_pymdvar_and_website.html
deleted file mode 100644
index 291a170..0000000
--- a/live/blog/a/updated_pyssg_pymdvar_and_website.html
+++ /dev/null
@@ -1,152 +0,0 @@
Updated pyssg to include pymdvar and the website
-
-
Again, I've updated pyssg, this time to add a bit of unit-testing as well as to include my extension pymdvar, which is used to convert ${some_variables} into their respective values based on a config file and/or environment variables. With this I also updated a bit of the CSS of the site as well as basically all the entries and base templates, a much needed update (for me, because externally it doesn't look like much). Along with this I also added a "return to top" button: once you scroll enough on the site, a new button appears on the bottom right to get back to the top. I also added tables of contents to entries that could use them (as well as a bit of CSS for them).
-
This update took a long time because I had a fundamental issue with how I was managing the "static" website, where I host all assets such as CSS, JS, images, etc., because I was using the <base> HTML tag. The issue is that this tag affects everything and there is no "opt-out" on some body tags, meaning that I would have to write the whole URL for all static assets. So I tried looking into changing how the image extension for python-markdown works, so that it includes this "base" URL I needed. But it was too much hassle, so I ended up developing my own extension mentioned earlier. Just as a side note, I noticed that my extension doesn't cover all my needs, so it probably won't cover yours; if you end up using it just test it out a bit yourself and then go ahead, PRs are welcomed.
-
One thing led to another so I ended up changing a lot of stuff, and with changes comes tiredness, so I ended up leaving the project for a while (again). This also led to not wanting to write or add anything else to the site until I sorted things out. But I'm reviving it again, I guess, and up to the next cycle.
-
The next things I’ll be doing are continuing with my @gamedev journey and probably upload some drawings if I feel like doing some.
One of the main reasons I started "blogging" was basically just to document how I set stuff up so I can reference it later in the future if I ever need to replicate the steps or just to show somebody, and these entries have helped me do so multiple times. I'll keep creating these entries, but after a while the Creating a title started to feel weird, because we're not creating anything really, it is just a set up/configuration/how-to/etc. So I think that using Set up a for the titles is better and makes more sense; probably using How to set up a is better for the SEO bullshit.
-
Anyways, so I'll start using Set up a instead of Creating a and will retroactively change the titles of these entries (by this entry the change should be applied already). This might impact some RSS feeds as they keep a cache of the feed and might duplicate the entries; heads up if for some reason somebody is using it.
-
diff --git a/live/blog/a/volviendo_a_usar_la_pagina.html b/live/blog/a/volviendo_a_usar_la_pagina.html
deleted file mode 100644
index 0c713ca..0000000
--- a/live/blog/a/volviendo_a_usar_la_pagina.html
+++ /dev/null
@@ -1,152 +0,0 @@
-
-
-
-
-
-
-Volviendo a usar la página -- Luévano's Blog
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
Using the site again
After a long time of struggling with wanting to use this thing again (damned d word and all that), I finally got my setup sorted out again so I can add new entries.

Among the things I had to do was update pyssg, because I couldn’t use it as-is; and while I was at it I added a couple of new features. Later I want to add more functionality so the whole site can be built with it; for now it’s done in segments: everything under luevano.xyz is done by hand, while blog and art use pyssg.

Another thing is that I might go back and edit some entries, just to make the ones titled Create a… consistent (it makes more sense for them to be Setup x… or something similar).

In other news, I’m very comfortable at my current job, even though the last 3 weeks or so have been hell at work. I should think about whether to omit personal or work stuff here, since who knows who might stumble upon this *thinking emoji*.
Set up a VPN server with OpenVPN
I’ve been wanting to write this entry, but I had no time since I also had to set up the VPN service itself to make sure what I’m writing makes sense; today is the day.

This will be installed and working alongside the other stuff I’ve written about in other posts (see the server tag). Also, this is intended only for IPv4 (it’s not that hard to include IPv6, but meh). As always, all commands are executed as root unless stated otherwise.

You will need a working server with root access, with ufw as the firewall.
Open port 1194 (the default) or, as a fallback, 443 (click here for more). I will do mine on port 1194, but switching is just a matter of changing 2 lines of configuration and one ufw rule.
PKI stands for Public Key Infrastructure, and it’s basically required for certificates, private keys and more. It is supposed to work between two servers and one client: a server in charge of creating, signing and verifying the certificates, a server with the OpenVPN service running, and the client making the request.

In a nutshell, it works something like this: 1) a client wants to use the VPN service, so it creates a request and sends it to the signing server, 2) this server checks and signs the request, returning the certificates to both the VPN service and the client, and 3) the client can now connect to the VPN service using the signed certificate, which the OpenVPN server knows about.

That’s how it should be set up… but, to be honest, all of this is a hassle and (in my case) I want something simple to use and manage. So I’m going to do everything on one server and then just give away the configuration file for the clients, effectively generating files that anyone can run and that will work, meaning that you need to be careful whom you give these files to (it also comes with a revoking mechanism, so no worries).
OpenVPN is a robust and highly flexible VPN daemon that’s pretty complete feature-wise.

Install the openvpn package:
pacman -S openvpn
Now, most of the stuff is going to be handled by (each, if you have more than one) server configuration. This might be the hardest thing to configure, but I’ve used a basic configuration file that has worked well for me, which is a compilation of stuff I found on the internet while configuring the file a while back.
# Server ip address (ipv4).
local 1.2.3.4 # your server public ip

# Port.
port 1194 # Might want to change it to 443

# TCP or UDP.
;proto tcp
proto udp # If port changes to 443, you should change this to tcp, too

# "dev tun" will create a routed IP tunnel,
# "dev tap" will create an ethernet tunnel.
;dev tap
dev tun

# Server specific certificates and more.
ca /etc/easy-rsa/pki/ca.crt
cert /etc/easy-rsa/pki/issued/server.crt
key /etc/easy-rsa/pki/private/server.key # This file should be kept secret.
dh /etc/openvpn/server/dh.pem
auth SHA512
tls-crypt /etc/openvpn/server/ta.key 0 # This file is secret.
crl-verify /etc/easy-rsa/pki/crl.pem

# Network topology.
topology subnet

# Configure server mode and supply a VPN subnet
# for OpenVPN to draw client addresses from.
server 10.8.0.0 255.255.255.0

# Maintain a record of client <-> virtual IP address
# associations in this file.
ifconfig-pool-persist ipp.txt

# Push routes to the client to allow it
# to reach other private subnets behind
# the server.
;push "route 192.168.10.0 255.255.255.0"
;push "route 192.168.20.0 255.255.255.0"

# If enabled, this directive will configure
# all clients to redirect their default
# network gateway through the VPN, causing
# all IP traffic such as web browsing and
# DNS lookups to go through the VPN
push "redirect-gateway def1 bypass-dhcp"

# Certain Windows-specific network settings
# can be pushed to clients, such as DNS
# or WINS server addresses.
# Google DNS.
;push "dhcp-option DNS 8.8.8.8"
;push "dhcp-option DNS 8.8.4.4"

# The keepalive directive causes ping-like
# messages to be sent back and forth over
# the link so that each side knows when
# the other side has gone down.
keepalive 10 120

# The maximum number of concurrently connected
# clients we want to allow.
max-clients 5

# It's a good idea to reduce the OpenVPN
# daemon's privileges after initialization.
user nobody
group nobody

# The persist options will try to avoid
# accessing certain resources on restart
# that may no longer be accessible because
# of the privilege downgrade.
persist-key
persist-tun

# Output a short status file showing
# current connections, truncated
# and rewritten every minute.
status openvpn-status.log

# Set the appropriate level of log
# file verbosity.
#
# 0 is silent, except for fatal errors
# 4 is reasonable for general usage
# 5 and 6 can help to debug connection problems
# 9 is extremely verbose
verb 3

# Notify the client when the server restarts so it
# can automatically reconnect.
# Only usable with udp.
explicit-exit-notify 1
# and ; are comments. Read each and every line, you might want to change some stuff (like the logging), especially the first line, which is your server’s public IP.

Now we need to enable packet forwarding (so we can access the web while connected to the VPN), which can be enabled at the interface level or globally (you can check the different options with sysctl -a | grep forward). I’ll do it globally; run:

sysctl net.ipv4.ip_forward=1

And create/edit the file /etc/sysctl.d/30-ipforward.conf:

net.ipv4.ip_forward=1
Now we need to configure ufw to forward traffic through the VPN. Append the following to /etc/default/ufw (or edit the existing line):

...
DEFAULT_FORWARD_POLICY="ACCEPT"
...
And change /etc/ufw/before.rules, appending the following lines after the header but before the *filter line:

...
# NAT (Network Address Translation) table rules
*nat
:POSTROUTING ACCEPT [0:0]

# Allow traffic from clients to the interface
-A POSTROUTING -s 10.8.0.0/24 -o interface -j MASQUERADE

# do not delete the "COMMIT" line or the NAT table rules above will not be processed
COMMIT

# Don't delete these required lines, otherwise there will be errors
*filter
...
Where interface must be changed depending on your system (in my case it’s ens3; another common one is eth0). I always check this by running ip addr, which gives you a list of interfaces (the one containing your server’s public IP is the one you want, or whatever interface your server uses to connect to the internet):

...
2: ens3: <SOMETHING,SOMETHING> bla bla
    link/ether bla:bla
    altname enp0s3
    inet my.public.ip.addr bla bla
...
Also make sure the 10.8.0.0/24 matches the subnet mask specified in the server.conf file (in this example it matches). You should check this very carefully, because I just spent a good 2 hours debugging why my configuration wasn’t working, and this was the reason (I could connect to the VPN, but had no external connection to the web).

Finally, allow the OpenVPN port you specified (in this example it’s 1194/udp) and reload ufw:
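Something along these lines does it with ufw:

ufw allow 1194/udp
ufw reload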
You might notice that I didn’t specify how to actually connect to the VPN. For that we need a configuration file similar to the server.conf file that we created.

The real way of doing this would be to run steps similar to the easy-rsa ones locally, send the request to the server, sign it, and retrieve it. Fuck all that, we’ll just create all configuration files on the server, as I was mentioning earlier.

Also, the client configuration file has to match the server one (to some degree). To make this easier, you can create a client-common file in /etc/openvpn/server with the following content:
client
dev tun
remote 1.2.3.4 1194 udp # change this to match your ip and port
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
auth SHA512
verb 3
Where you should make any changes necessary, depending on your configuration.

Now we need a way to create and revoke configuration files. For this I created a script, heavily based on one of the links I mentioned at the beginning. You can place this script anywhere you like, and you should take a look at it before running it, because you’ll be running it with elevated privileges (sudo).

In a nutshell, what it does is: generate a new client certificate keypair, update the CRL and create a new .ovpn configuration file that consists of the client-common data and all of the required certificates; or revoke an existing client and refresh the CRL. The file is placed under ~/ovpn.

Create a new file with the following content (name it whatever you like) and don’t forget to make it executable (chmod +x vpn_script):
#!/bin/sh
# Client ovpn configuration creation and revoking.
MODE=$1
if [ ! "$MODE" = "new" -a ! "$MODE" = "rev" ]; then
    echo "$1 is not a valid mode, using default 'new'"
    MODE=new
fi

CLIENT=${2:-guest}
if [ -z $2 ]; then
    echo "there was no client name passed as second argument, using 'guest' as default"
fi

# Expiration config.
EASYRSA_CERT_EXPIRE=3650
EASYRSA_CRL_DAYS=3650

# Current PWD.
CPWD=$PWD
cd /etc/easy-rsa/

if [ "$MODE" = "rev" ]; then
    easyrsa --batch revoke $CLIENT

    echo "$CLIENT revoked."
elif [ "$MODE" = "new" ]; then
    easyrsa build-client-full $CLIENT nopass

    # This is what actually generates the config file.
    {
        cat /etc/openvpn/server/client-common
        echo "<ca>"
        cat /etc/easy-rsa/pki/ca.crt
        echo "</ca>"
        echo "<cert>"
        sed -ne '/BEGIN CERTIFICATE/,$ p' /etc/easy-rsa/pki/issued/$CLIENT.crt
        echo "</cert>"
        echo "<key>"
        cat /etc/easy-rsa/pki/private/$CLIENT.key
        echo "</key>"
        echo "<tls-crypt>"
        sed -ne '/BEGIN OpenVPN Static key/,$ p' /etc/openvpn/server/ta.key
        echo "</tls-crypt>"
    } > "$(eval echo ~${SUDO_USER:-$USER}/ovpn/$CLIENT.ovpn)"

    eval echo "~${SUDO_USER:-$USER}/ovpn/$CLIENT.ovpn file generated."
fi

# Finish up, re-generates the crl
easyrsa gen-crl
chown nobody:nobody pki/crl.pem
chmod o+r pki/crl.pem
cd $CPWD
The way to use it is to run sh vpn_script <mode> <client_name> as sudo, where mode is new or rev (revoke); when revoking, it doesn’t actually delete the .ovpn file in ~/ovpn. Again, this is a little script that I put together, so you should check it out; it may need tweaks (especially depending on your directory structure for easy-rsa).

Now just get the generated .ovpn file, import it into OpenVPN on your client of preference, and you should have a working VPN service.
Set up a website with Nginx and Certbot
These are general notes on how to set up an Nginx web server plus Certbot for SSL certificates, initially learned from Luke’s video; after some use and research I added more stuff to the mix. Actually, at the time of writing this entry, I’m configuring the web server again on a new VPS instance, so this is going to be fresh.

As a side note, i use arch btw, so everything here is aimed at an Arch Linux distro, and I’m doing everything on a VPS. Also note that most, if not all, commands here are executed with root privileges.
A domain name (duh!). I got mine on Epik (affiliate link, btw).

With the corresponding A and AAAA records pointing to the VPS’ IPs. I have three records for each type: an empty string, “www” and “*” (a wildcard), so that “domain.name”, “www.domain.name” and “anythingelse.domain.name” all point to the same VPS (meaning you can have several VPS for different sub-domains). These depend on the VPS provider.

A VPS or somewhere else to host it. I’m using Vultr (also an affiliate link, btw).

With ssh already configured both on the local machine and on the remote machine.

Firewall already configured to allow ports 80 (HTTP) and 443 (HTTPS). I use ufw, so it’s just a matter of running ufw allow 80,443/tcp (for example) as root and you’re golden.

cron installed if you follow along (you could use systemd timers, or whatever other method you prefer to automate running commands every certain time).
Nginx is a web (HTTP) server and reverse proxy server.

You have two options: nginx and nginx-mainline. I prefer nginx-mainline because it’s the “up to date” package, even though nginx is labeled the “stable” version. Install the package and enable/start the service:
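The commands for that would be along these lines:

pacman -S nginx-mainline
systemctl enable nginx
systemctl start nginx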
And that’s it: at this point you can already look at the default initial page of Nginx if you enter your server’s IP in a web browser.

As stated in the welcome page, configuration is needed; head to the directory of Nginx:

cd /etc/nginx

Here you have several files; the important one is nginx.conf, which, as its name implies, contains general configuration of the web server. If you peek into the file you will see that it contains around 120 lines, most of which are commented out, plus the welcome page’s server block. While you can configure a website in this file, it’s common practice to do it in a separate file (so you can scale really easily if needed, for more websites or sub-domains).

Inside the nginx.conf file, delete the server blocks and add the lines include sites-enabled/*; (to look into individual server configuration files) and types_hash_max_size 4096; (to get rid of an ugly warning that will keep appearing) somewhere inside the http block.
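The final nginx.conf would then look something like this minimal sketch (ignoring the comments just for clarity; exact defaults vary by version, so treat this as an outline rather than the literal file):

user http;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    types_hash_max_size 4096;
    include sites-enabled/*;
}

And a per-site file (under, say, sites-available/domain.conf; the names here are placeholders) would hold the actual server block:

server {
    listen 80;
    listen [::]:80;
    root /var/www/some_folder;
    server_name domain.name www.domain.name;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}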
That could serve as a template if you intend to add more domains.

Note some things:

listen: we’re telling Nginx which port to listen to (IPv4 and IPv6, respectively).

root: the root directory where the website files (.html, .css, .js, etc.) are located. I followed Luke’s directory path: /var/www/some_folder.

server_name: the actual domain to “listen” to (for my website it is server_name luevano.xyz www.luevano.xyz; and for this blog it is server_name blog.luevano.xyz www.blog.luevano.xyz;).

index: what file to serve as the index (could be any .html, .htm, .php, etc. file) when just entering the website.

location: what goes after domain.name, used in case of different configurations depending on the URL path (deny access on /private, make a proxy on /proxy, etc.).

try_files: tells Nginx what files to look for.
Then, make a symbolic link from this configuration file to the sites-enabled directory:
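Assuming the file lives in sites-available (adjust the paths to your layout):

ln -s /etc/nginx/sites-available/domain.conf /etc/nginx/sites-enabled/domain.conf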
This is so the nginx.conf file can look up the newly created server configuration. With this method of having each server configuration file separate, you can easily “deactivate” any website by just deleting the symbolic link in sites-enabled and you’re good, or add new configuration files and keep everything nice and tidy.

All you have to do now is restart (or enable and start, if you haven’t already) the Nginx service (and optionally test the configuration):

nginx -t
systemctl restart nginx

If everything went correctly, you can now go to your website by typing domain.name in a web browser. But you will see a “404 Not Found” page (maybe with a different Nginx version).

That’s no problem, it means the web server is actually working. Just add an index.html file with something simple to see it in action (in the /var/www/some_folder that you decided upon). If you keep seeing the 404 page, make sure your root line is correct and that the directory/index file exists.
The only “bad” (bloated) thing about Certbot is that it uses Python, but for me that doesn’t matter too much. You may want to look up an alternative if you prefer. Install the certbot and certbot-nginx packages:

pacman -S certbot certbot-nginx

After that, all you have to do is run certbot and follow the instructions given by the tool:

certbot --nginx

It will ask you for some information, for you to accept some agreements and for the names to activate HTTPS for. You will also want to “say yes” to the redirection from HTTP to HTTPS. And that’s it, you can now go to your website and see that HTTPS is active.

Now, the certificate given by Certbot expires every 3 months or something like that, so you want to renew it every once in a while. I used to do this with cron or by manually creating a systemd timer and service, but now it’s just a matter of enabling the certbot-renew.timer:

systemctl enable --now certbot-renew.timer

The deploy-hook is not needed anymore, only for plugins. For more, visit the Arch Linux Wiki.
Set up an XMPP server with Prosody compatible with Conversations and Movim
Update: I no longer host this XMPP server, as it consumed a lot of resources and I wasn’t using it that much. I’ll probably re-create it in the future, though.

Recently I set up an XMPP server (and a Matrix one, too) for my personal use, and for friends if they want one; I made one for EL ELE EME, for example. So, here are the notes on how I set up the server so that it is compatible with the Conversations app and the Movim social network. You can see my addresses at contact and the XMPP compliance/score of the server.

As with my other entries, this is under a server running Arch Linux, with the Nginx web server and Certbot certificates. And all commands here are executed as root, unless specified otherwise.

The prerequisites are the same as with my other entries (website, mail and git), plus:
A and (optionally) AAAA DNS records for:

xmpp: the actual XMPP server and the file upload service.

muc (or conference): for multi-user chats.

pubsub: the publish-subscribe service.

proxy: a proxy in case one of the users needs it.

vjud: the user directory.

(Optionally, but recommended) the following SRV DNS records; make sure each points to an A or AAAA record (matching the records from the last point, for example):

_xmpp-client._tcp.{your.domain}. for port 5222, pointing to xmpp.{your.domain}.

_xmpp-server._tcp.{your.domain}. for port 5269, pointing to xmpp.{your.domain}.

_xmpp-server._tcp.muc.{your.domain}. for port 5269, pointing to xmpp.{your.domain}.

SSL certificates for the previous subdomains; similar to my other entries, just create the appropriate prosody.conf (where server_name will be all the subdomains defined above) and run certbot --nginx. You can find the example configuration file almost at the end of this entry.

Email addresses for admin, abuse, contact, security, etc. Or use your own email for all of them; it doesn’t really matter much, as long as you define them in the configuration and they are valid. I have aliases so those emails are forwarded to me.

Allow ports 5000, 5222, 5269, 5280 and 5281 for Prosody, and 3478 and 5349 for Turnserver, which are the defaults for coturn.
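With ufw that would be something along these lines (TURN usually wants both TCP and UDP, so the protocol split here is a guess):

ufw allow 5000,5222,5269,5280,5281/tcp
ufw allow 3478,5349/tcp
ufw allow 3478,5349/udp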
We need mercurial to be able to download and update the extra modules needed to make the server compliant with conversations.im and mov.im. Go to /var/lib/prosody, clone the latest Prosody modules repository and prepare the directories:

cd /var/lib/prosody
hg clone https://hg.prosody.im/prosody-modules modules-available
mkdir modules-enabled

You can see that I follow an approach similar to the one I used with Nginx and the server configuration, where I have all the modules available in one directory and make symlinks in another to keep track of what is being used. You can update the repository by running hg pull --update while inside the modules-available directory (similar to Git).
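Enabling a module is then just a symlink away; for example (the module names here are only illustrative, the full list is whatever the configuration below enables):

cd /var/lib/prosody/modules-enabled
ln -s ../modules-available/mod_smacks .
ln -s ../modules-available/mod_cloud_notify .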
Add other modules if needed, but these work for the apps that I mentioned. You should also change the permissions for these files:

chown -R prosody:prosody /var/lib/prosody

Now configure the server by editing the /etc/prosody/prosody.cfg.lua file. It’s a bit tricky to configure, so here is my configuration file (lines starting with -- are comments). Make sure to change it according to your domain, and maybe your preferences. Read each line and each comment to know what’s going on; it’s easier to explain it with comments in the file itself than to strip it into a lot of pieces.

Also note that the configuration file has a “global” section and a per “virtual host”/“component” section: basically everything above the VirtualHost/Component sections is global, and everything below each VirtualHost/Component corresponds to that section.
-- important for systemd
daemonize = true
pidfile = "/run/prosody/prosody.pid"

-- or your account, note that this is an xmpp jid, not an email
admins = { "admin@your.domain" }

contact_info = {
    abuse = { "mailto:abuse@your.domain", "xmpp:abuse@your.domain" };
    admin = { "mailto:admin@your.domain", "xmpp:admin@your.domain" };
    feedback = { "mailto:feedback@your.domain", "xmpp:feedback@your.domain" };
    security = { "mailto:security@your.domain" };
    support = { "mailto:support@your.domain", "xmpp:support@muc.your.domain" };
}

-- so prosody can look up the plugins we added
plugin_paths = { "/var/lib/prosody/modules-enabled" }

modules_enabled = {
    -- Generally required
    "roster"; -- Allow users to have a roster. Recommended ;)
    "saslauth"; -- Authentication for clients and servers. Recommended if you want to log in.
    "tls"; -- Add support for secure TLS on c2s/s2s connections
    "dialback"; -- s2s dialback support
    "disco"; -- Service discovery
    -- Not essential, but recommended
    "carbons"; -- Keep multiple clients in sync
    "pep"; -- Enables users to publish their avatar, mood, activity, playing music and more
    "private"; -- Private XML storage (for room bookmarks, etc.)
    "blocklist"; -- Allow users to block communications with other users
    "vcard4"; -- User profiles (stored in PEP)
    "vcard_legacy"; -- Conversion between legacy vCard and PEP Avatar, vcard
    "limits"; -- Enable bandwidth limiting for XMPP connections
    -- Nice to have
    "version"; -- Replies to server version requests
    "uptime"; -- Report how long server has been running
    "time"; -- Let others know the time here on this server
    "ping"; -- Replies to XMPP pings with pongs
    "register"; -- Allow users to register on this server using a client and change passwords
    "mam"; -- Store messages in an archive and allow users to access it
    "csi_simple"; -- Simple Mobile optimizations
    -- Admin interfaces
    "admin_adhoc"; -- Allows administration via an XMPP client that supports ad-hoc commands
    --"admin_telnet"; -- Opens telnet console interface on localhost port 5582
    -- HTTP modules
    "http"; -- Explicitly enable http server.
    "bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
    "websocket"; -- XMPP over WebSockets
    "http_files"; -- Serve static files from a directory over HTTP
    -- Other specific functionality
    "groups"; -- Shared roster support
    "server_contact_info"; -- Publish contact information for this service
    "announce"; -- Send announcement to all online users
    "welcome"; -- Welcome users who register accounts
    "watchregistrations"; -- Alert admins of registrations
    "motd"; -- Send a message to users when they log in
    --"legacyauth"; -- Legacy authentication. Only used by some old clients and bots.
    --"s2s_bidi"; -- not yet implemented, have to wait for v0.12
    "bookmarks";
    "checkcerts";
    "cloud_notify";
    "csi_battery_saver";
    "default_bookmarks";
    "http_avatar";
    "idlecompat";
    "presence_cache";
    "smacks";
    "strict_https";
    --"pep_vcard_avatar"; -- not compatible with this version of pep, wait for v0.12
    "watchuntrusted";
    "webpresence";
    "external_services";
}

-- only if you want to disable some modules
modules_disabled = {
    -- "offline"; -- Store offline messages
    -- "c2s"; -- Handle client connections
    -- "s2s"; -- Handle server-to-server connections
    -- "posix"; -- POSIX functionality, sends server to background, enables syslog, etc.
}

external_services = {
    {
        type = "stun",
        transport = "udp",
        host = "proxy.your.domain",
        port = 3478
    }, {
        type = "turn",
        transport = "udp",
        host = "proxy.your.domain",
        port = 3478,
        -- you could decide this now or come back later when you install coturn
        secret = "YOUR SUPER SECRET TURN PASSWORD"
    }
}

--- general global configuration
http_ports = { 5280 }
http_interfaces = { "*", "::" }

https_ports = { 5281 }
https_interfaces = { "*", "::" }

proxy65_ports = { 5000 }
proxy65_interfaces = { "*", "::" }

http_default_host = "xmpp.your.domain"
http_external_url = "https://xmpp.your.domain/"
-- or if you want to have it somewhere else, change this
https_certificate = "/etc/prosody/certs/xmpp.your.domain.crt"

hsts_header = "max-age=31556952"

cross_domain_bosh = true
--consider_bosh_secure = true
cross_domain_websocket = true
--consider_websocket_secure = true

trusted_proxies = { "127.0.0.1", "::1", "192.169.1.1" }

pep_max_items = 10000

-- this is disabled by default, and I keep it like this, depends on you
--allow_registration = true

-- you might want these options as they are
c2s_require_encryption = true
s2s_require_encryption = true
s2s_secure_auth = false
--s2s_insecure_domains = { "insecure.example" }
--s2s_secure_domains = { "jabber.org" }

-- where the certificates are stored (/etc/prosody/certs by default)
certificates = "certs"
checkcerts_notify = 7 -- ( in days )

-- rate limits on connections to the server, these are my personal settings, because by default they were limited to something like 30kb/s
limits = {
    c2s = {
        rate = "2000kb/s";
    };
    s2sin = {
        rate = "5000kb/s";
    };
    s2sout = {
        rate = "5000kb/s";
    };
}

-- again, this could be yourself, it is a jid
unlimited_jids = { "admin@your.domain" }

authentication = "internal_hashed"

-- if you don't want to use sql, change it to internal and comment the second line
-- since this is optional, i won't describe how to setup mysql or setup the user/database, that would be out of the scope for this entry
storage = "sql"
sql = { driver = "MySQL", database = "prosody", username = "prosody", password = "PROSODY USER SECRET PASSWORD", host = "localhost" }

archive_expires_after = "4w" -- configure message archive
max_archive_query_results = 20;
mam_smart_enable = true
default_archive_policy = "roster" -- archive only messages from users who are in your roster

-- normally you would like at least one log file of certain level, but I keep all of them, the default is only the info = "*syslog" one
log = {
    info = "*syslog";
    warn = "prosody.warn";
    error = "prosody.err";
    debug = "prosody.debug";
    -- "*console"; -- Needs daemonize=false
}

-- cloud_notify
push_notification_with_body = false -- Whether or not to send the message body to remote pubsub node
push_notification_with_sender = false -- Whether or not to send the message sender to remote pubsub node
push_max_errors = 5 -- persistent push errors are tolerated before notifications for the identifier in question are disabled
push_max_devices = 5 -- number of allowed devices per user

-- by default every user on this server will join these muc rooms
default_bookmarks = {
    { jid = "room@muc.your.domain", name = "The Room" };
    { jid = "support@muc.your.domain", name = "Support Room" };
}

-- could be your jid
untrusted_fail_watchers = { "admin@your.domain" }
untrusted_fail_notification = "Establishing a secure connection from $from_host to $to_host failed. Certificate hash: $sha1. $errors"

----------- Virtual hosts -----------
VirtualHost "your.domain"
    name = "Prosody"
    http_host = "xmpp.your.domain"

disco_items = {
    { "your.domain", "Prosody" };
    { "muc.your.domain", "MUC Service" };
    { "pubsub.your.domain", "Pubsub Service" };
    { "proxy.your.domain", "SOCKS5 Bytestreams Service" };
    { "vjud.your.domain", "User Directory" };
}

-- Multi-user chat
Component "muc.your.domain" "muc"
    name = "MUC Service"
    modules_enabled = {
        --"bob"; -- not compatible with this version of Prosody
        "muc_limits";
        "muc_mam"; -- message archive in muc, again, a placeholder
        "muc_mam_hints";
        "muc_mention_notifications";
        "vcard_muc";
    }

    restrict_room_creation = false

    muc_log_by_default = true
    muc_log_presences = false
    log_all_rooms = false
    muc_log_expires_after = "1w"
    muc_log_cleanup_interval = 4 * 60 * 60

-- Upload
Component "xmpp.your.domain" "http_upload"
    name = "Upload Service"
    http_host = "xmpp.your.domain"
    -- you might want to change this, these are numbers in bytes, so 10MB and 100MB respectively
    http_upload_file_size_limit = 1024*1024*10
    http_upload_quota = 1024*1024*100

-- Pubsub
Component "pubsub.your.domain" "pubsub"
    name = "Pubsub Service"
    pubsub_max_items = 10000
    modules_enabled = {
        "pubsub_feeds";
        "pubsub_text_interface";
    }

    -- personally i don't have any feeds configured
    feeds = {
        -- The part before = is used as PubSub node
        --planet_jabber = "http://planet.jabber.org/atom.xml";
        --prosody_blog = "http://blog.prosody.im/feed/atom.xml";
    }

-- Proxy
Component "proxy.your.domain" "proxy65"
    name = "SOCKS5 Bytestreams Service"
    proxy65_address = "proxy.your.domain"

-- Vjud, user directory
Component "vjud.your.domain" "vjud"
    name = "User Directory"
    vjud_mode = "opt-in"
You HAVE to read the whole configuration file, because there are a lot of things that you need to change to make it work with your server/domain. Test the configuration file with:

luac5.2 -p /etc/prosody/prosody.cfg.lua
Notice that by default Prosody will look up certificates that look like sub.your.domain, but if you get the certificates like I do, you’ll have a single certificate for all subdomains, and by default it is in /etc/letsencrypt/live, which has some strict permissions. So, to import it you can run:
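Which is the cert import subcommand (check prosodyctl’s help for the exact form on your version):

prosodyctl --root cert import /etc/letsencrypt/live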
Ignore the complaining about not finding the subdomain certificates, and note that you will have to run that command on each certificate renewal. To automate this, add the --deploy-hook flag to your automated Certbot renewal system; for me it’s a systemd timer with a certbot.service along these lines:
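This is only a sketch of such a service, assuming the import command from above (paths and unit details may differ on your setup):

[Unit]
Description=Certbot renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --deploy-hook "prosodyctl --root cert import /etc/letsencrypt/live"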
That’s basically all the configuration that Prosody itself needs, but we still have to configure Nginx and coturn before starting/enabling the prosody service.
Since this is not an ordinary Nginx configuration file, I’m going to describe this too. Your prosody.conf file should have location blocks for the BOSH/WebSocket endpoints under the main server block (the one that listens on HTTPS):
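Roughly like the following sketch (the proxy headers are the usual suspects; tune to taste):

location /http-bind {
    proxy_pass https://localhost:5281/http-bind;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_buffering off;
    tcp_nodelay on;
}

location /xmpp-websocket {
    proxy_pass https://localhost:5281/xmpp-websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}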
And you will need host-meta and host-meta.json files inside the .well-known/acme-challenge directory for your.domain (following my nomenclature: /var/www/yourdomaindir/.well-known/acme-challenge/).
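These follow XEP-0156; a sketch of both, with the BOSH/WebSocket URLs matching the configuration above:

host-meta:

<?xml version='1.0' encoding='utf-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
    <Link rel="urn:xmpp:alt-connections:xbosh" href="https://xmpp.your.domain:5281/http-bind" />
    <Link rel="urn:xmpp:alt-connections:websocket" href="wss://xmpp.your.domain:5281/xmpp-websocket" />
</XRD>

host-meta.json:

{
    "links": [
        { "rel": "urn:xmpp:alt-connections:xbosh", "href": "https://xmpp.your.domain:5281/http-bind" },
        { "rel": "urn:xmpp:alt-connections:websocket", "href": "wss://xmpp.your.domain:5281/xmpp-websocket" }
    ]
}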
Remember to have your prosody.conf file symlinked (or discoverable by Nginx) in the sites-enabled directory. You can now test the configuration and restart the nginx service, the same nginx -t and systemctl restart nginx dance from the website entry.

coturn is an implementation of a TURN and STUN server, which in general is (at least in the XMPP world) for voice support and external service discovery.

Install the coturn package:

pacman -S coturn

You can modify the configuration file (located at /etc/turnserver/turnserver.conf) as desired, but you at least need to make the following changes (uncomment or edit):

use-auth-secret
realm=proxy.your.domain
static-auth-secret=YOUR SUPER SECRET TURN PASSWORD

I’m sure there is more configuration to be made, like using SQL to store data and whatnot, but for now this is enough for me. Note that you may be missing some functionality needed to create dynamic users for the TURN server; to be honest I haven’t tested this since I don’t use this feature in my XMPP clients, but if it doesn’t work, or you know of an error or missing configuration, don’t hesitate to contact me.
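At this point you can start everything up, presumably (service names as packaged on Arch):

systemctl enable --now prosody
systemctl enable --now turnserver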
You can add your first user with the prosodyctl command (it will prompt you for a password):

prosodyctl adduser user@your.domain

You may want to add a compliance user, so you can check if your server is set up correctly: go to the XMPP Compliance Tester and enter the compliance user’s credentials. It should get a compliance score similar to mine.

Additionally, you can test the security of your server on IM Observatory; here you only need to specify your domain.name (not xmpp.domain.name, if you set up the SRV DNS records correctly). Again, it should get a score similar to mine.

You can now log into your XMPP client of choice; if it asks for the server it should be xmpp.your.domain (or your.domain for some clients), with login credentials you@your.domain and the password you chose (which you can change in most clients).

That’s it, send me a message at david@luevano.xyz if you were able to set up the server successfully.
I just have a bit of experience with Godot and with gamedev in general, so I started with this game as it is pretty straightforward. On a high level, the main characteristics of the game are:

Literally just one sprite going up and down.

Constant horizontal movement of the world/player.

If you go through the gap in the pipes, you score a point.

If you touch the pipes, the ground, or go past the “ceiling”, you lose.
The game was originally developed with Godot 4.0 alpha 8, but it didn’t support HTML5 (WebAssembly) export… so I backported it to Godot 3.5 rc1.

Note: I’ve updated the game to Godot 4 and documented it in my FlappyBird devlog 2 entry.

I’m not going to specify all the details, only the needed parts and what could be confusing, as the source code is available and can be inspected; this also assumes minimal knowledge of Godot in general. Usually, when I mention setting/changing something, it’s a property that can be found under the Inspector of the relevant node, unless stated otherwise; also, all attached scripts have the same name as their scenes, but in snake_case (scenes/nodes in PascalCase).

One thing to note is that I started writing this after I finished the game, so it’s hard to go part by part, and it will be hard to test individual parts when following along, as everything depends on everything else. For the next devlog I’ll write as I go, and it will include all the changes to the nodes/scripts as I find them; probably a better idea and easier to follow.

The source code can be found at luevano/flappybirdgodot#godot-3.5 (godot-3.5 branch); it also contains the exported versions for HTML5, Windows and Linux (the sound used to be too loud, but on the latest version this is fixed and the audio level is configurable now). Playable on itch.io (Godot 4 version):
Since this is just pixel art, the import settings for textures need to be adjusted so the sprites don’t look blurry. Go to Project -> Project settings… -> Import defaults and in the drop-down select Texture, untick everything and make sure Compress/Mode is set to Lossless.

It’s also a good idea to set up some config variables project-wide. To do so, go to Project -> Project settings… -> General, select Application/config and add a new property (there is a text box at the top of the project settings window) for the game scale: application/config/game_scale, use float for the type, click add, and set the new property to 3.0. In the same window, also add application/config/version as a string and make it 1.0.0 (or whatever number you want).

For my personal preference, also disable some of the GDScript debug warnings that are annoying. This is done at Project -> Project settings… -> General; select Debug/GDScript and toggle off Unused arguments, Unused signal and Return value discarded, and any other that might come up too often that you don’t want to see.

Finally, set the initial window size in Project -> Project settings… -> General; select Display/Window and set Size/Width and Size/Height to 600 and 800, respectively, as well as Stretch/Mode to viewport and Stretch/Aspect to keep.

I only used 3 actions (keybindings): jump, restart and toggle_debug (optional). To add custom keybindings (so that the Input.something() API can be used), go to Project -> Project settings… -> Input Map, write jump in the text box and click add; it will be added to the list, and then it’s just a matter of clicking the + sign to add a Physical key. Press any key you want to use for jumping and click ok. Do the same for the rest of the actions.

Finally, rename the physics layers so we don’t lose track of which layer is which. Go to Project -> Layer Names -> 2d Physics and change the first 5 layer names to (in order): player, ground, pipe, ceiling and score.
For the assets I found a pack that contains just what I need: flappy-bird-assets by MegaCrash; I just did some minor modifications to the file names. For the font I used Silver, and for the sounds, the resources from FlappyBird-N64 (which seem to be taken from 101soundboards.com, whose original copyright holder is .Gears anyway).

Create the necessary directories to hold the respective assets, and then it’s just a matter of dragging and dropping. I used the directories res://entities/actors/player/sprites/, res://fonts/, res://levels/world/background/sprites/, res://levels/world/ground/sprites/, res://levels/world/pipe/sprites/ and res://sfx/ (the entities/actors directories are really not necessary). The FileSystem window should look similar for all of them, except maybe for the file extensions.
Now it’s time to actually create the game, by creating the basic scenes that will make it up. The hardest and most confusing part is going to be the TileMaps, so that goes first.

I’m using a scene called WorldTiles with a Node2D node as root, called the same, with 2 different TileMap nodes as children named GroundTileMap and PipeTileMap (each is its own scene); yes, 2 different TileMaps, because we need 2 different physics colliders (in Godot 4.0 you can have a single TileMap with different physics colliders in it). Each node has its own script.

To configure the GroundTileMap, select the node, click on (empty) in the TileMap/Tile set property and then click on New TileSet; then click where the (empty) used to be, and a new window should open at the bottom. Click on the plus at the bottom left and you can now select the specific tile set to use. Then click on the yellow + New Single Tile, activate the grid and select any of the tiles. We need to do this because, for some reason, we can’t change the snap options before selecting a tile. After selecting a random tile, set the Snap Options/Step (in the Inspector) to 16x16 (or, if using a different tile set, to its tile size).

Now you can select the actual single tile. Once selected, click on Collision, use the rectangle tool and draw the rectangle corresponding to that tile’s collision. Do the same for the other 3 tiles.

The ordering is important only for the “underground tile”, which is the filler ground: it should be at the end (index 3); if this is not the case, repeat the process (it’s possible to rearrange them, but it’s hard to explain as it’s pretty weird).

At this point the tilemap doesn’t have any physics and the cell size is wrong. Select the GroundTileMap, set TileMap/Cell/Size to 16x16, set TileMap/Collision/Layer to bit 2 only (the ground layer) and disable all TileMap/Collision/Mask bits.

Now it’s just a matter of repeating the same for the pipes (PipeTileMap). The only difference is that when selecting the tiles you need to select 2 tiles, as the pipe is 2 tiles wide, or just set Snap Options/Step to 32x16, for example; just keep the cell size at 16x16.

I added a few default ground tiles to the scene, just for testing purposes, but I left them there. These could be placed programmatically, but I was too lazy to change things. On the WorldTiles scene, while selecting the GroundTileMap, you can select the tile you want to paint with and left click in the grid to paint with it. Tiles need to be placed from (-8, 7) to (10, 7), as well as the tile below each with the filler ground (the tile position/coordinates show at the bottom left).
For the player, create a new scene called Player with a KinematicBody2D node named Player as the root of the scene; then, for the children: an AnimatedSprite named Sprite, a CollisionShape2D named Collision (with a circle shape) and 3 AudioStreamPlayers for JumpSound, DeadSound and HitSound. Not sure if it’s good practice to have the audio here, since I did that at the end; pretty lazy. Then attach a script to the Player node.

Select the Player node and set CollisionShape2D/Collision/Layer to 1 and CollisionObject2D/Collision/Mask to 2 and 3 (ground and pipe).

For the Sprite node, click on the (empty) for the AnimatedSprite/Frames property and click New SpriteFrames; click again where the (empty) used to be and a new window should open at the bottom. Right off the bat, set the Speed to 10 FPS (bottom left) and rename default to bird_1. With bird_1 selected, click on Add frames from a Sprite Sheet, which is the second button under Animation Frames (the icon of a small grid, next to the folder icon); a new window will pop up where you need to select the respective sprite sheet and configure it for importing. In the Select Frames window, change Vertical to 1 and then select all 4 frames (Ctrl + Scroll wheel to zoom in).

Finally, make sure the Sprite node’s AnimatedSprite/Animation is set to bird_1 and that the Collision node is configured correctly for its size and position (I just have it as a radius of 7), as well as dropping the SFX files into the corresponding AudioStreamPlayer (into the AudioStreamPlayer/Stream property).
The other scenes are really simple and don’t require much setup:

CeilingDetector: just an Area2D node with a CollisionShape2D in the form of a rectangle (CollisionShape2D/Shape/extents to (120, 10)), stretched horizontally so it fits the whole screen. CollisionObject2D/Collision/Layer set to bit 4 (ceiling) and CollisionObject2D/Collision/Mask set to bit 1 (player).

ScoreDetector: similar to the CeilingDetector, but vertical (CollisionShape2D/Shape/extents to (2.5, 128)) and CollisionObject2D/Collision/Layer set to bit 1 (player).

WorldDetector: a Node2D with a script attached and 3 RayCast2Ds as children:

NewTile: Raycast2D/Enabled to true (checked), Raycast2D/Cast To to (0, 400), Raycast2D/Collision Mask to bit 2 (ground) and Node2D/Transform/Position to (152, -200).

OldTile: same as “NewTile”, except for Node2D/Transform/Position, set it to (-152, -200).

OldPipe: same as “OldTile”, except for Raycast2D/Collision Mask, set it to bit 3 (pipe).
The Game scene holds all the playable stuff; here we drop in all the previous scenes. The root node is a Node2D, which also has a script attached. We also need to add 2 additional AudioStreamPlayers for the “start” and “score” sounds, as well as a Sprite for the background (Sprite/Offset/Offset set to (0, 10)) and a Camera2D (Camera2D/Current set to true (checked)).

We need some font resources to style the Label fonts. Under the FileSystem window, right click on the fonts directory (create one if needed), click on New Resource... and select DynamicFontData; save it in the “fonts” directory as SilverDynamicFontData.tres (Silver as it is the font I’m using). Then double click the just-created resource and set DynamicFontData/Font Path to the actual Silver.ttf font (or whatever you want).

Then create a new resource, and this time select DynamicFont; name it SilverDynamicFont.tres, double click to edit and add SilverDynamicFontData.tres to the DynamicFont/Font/Font Data property (I personally also toggled off the DynamicFont/Font/Antialiased property). Now just set DynamicFont/Settings/(Size, Outline Size, Outline Color) to 32, 1 and black, respectively (or any other values you want).

Do the same for another DynamicFont, which will be used for the score label, named SilverScoreDynamicFont.tres. The only changes are DynamicFont/Settings/(Size, Outline Size), which are set to 128 and 2, respectively.
The UI scene has a bunch of nested nodes, so I’ll try to be concise here. The root node is a CanvasLayer named UI with its own script attached, and for the children:

MarginContainer: a MarginContainer with Control/Margin/(Left, Top) set to 10 and Control/Margin/(Right, Bottom) set to -10.

InfoContainer: a VBoxContainer with Control/Theme Overrides/Constants/Separation set to 250.

ScoreContainer: a VBoxContainer.

Score: a Label with Label/Align set to Center and Control/Theme Overrides/Fonts/Font set to SilverScoreDynamicFont.tres; if needed, adjust the DynamicFont settings.

HighScore: same as Score, except for Control/Theme Overrides/Fonts/Font, which is set to SilverDynamicFont.tres.

StartGame: same as HighScore.

DebugContainer: a VBoxContainer.

FPS: a Label.

VersionContainer: a VBoxContainer with BoxContainer/Alignment set to Begin.

Version: a Label.
Main is the final scene, where we connect the Game and the UI. It’s made of a Node2D with its own script attached, and an instance of Game and of UI as its children.

This is a good time to set the default scene that runs when we start the game: go to Project -> Project settings… -> General and, in Application/Run, set Main Scene to the Main.tscn scene.
I’m going to keep the scripting part to the most basic code blocks, as it’s too much code; for a complete view you can head to the source code.

As of now, the game itself doesn’t do anything if we hit play. The first thing to do, so we have something going on, is the minimal player scripting.

The most basic code needed so the bird goes up and down is to just detect jump key presses and add a negative jump velocity so it goes up (the y coordinate is reversed in Godot…); we also check the sign of the y velocity to decide if the animation is playing or not.
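A minimal sketch of that idea (the constant values here are made up; check the repo for the actual ones):

extends KinematicBody2D

const SPEED: float = 100.0  # constant horizontal speed, made-up value
const JUMP_VELOCITY: float = -350.0  # negative y is up, made-up value
const GRAVITY: float = 1000.0  # made-up value

onready var sprite: AnimatedSprite = $Sprite
var velocity: Vector2 = Vector2.ZERO


func _physics_process(delta: float) -> void:
    velocity.x = SPEED
    velocity.y += GRAVITY * delta
    if Input.is_action_just_pressed("jump"):
        velocity.y = JUMP_VELOCITY

    # play the flapping animation only while going up
    if velocity.y < 0.0:
        sprite.play()
    else:
        sprite.stop()

    velocity = move_and_slide(velocity, Vector2.UP)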
You can play it now and you should be able to jump up and down, and the bird should stop at the ground (although you can keep jumping). One thing to notice is that when doing sprite.stop() it stays on the last frame; we can fix that using the code below (and then change sprite.stop() to _stop_sprite()):

func _stop_sprite() -> void:
    if sprite.playing:
        sprite.stop()
    if sprite.frame != 0:
        sprite.frame = 0

Where we just make sure the sprite rests on frame 0 whenever it stops.
Now it’s just a matter of adding the other needed code for moving horizontally, adding sound by getting a reference to the AudioStreamPlayers and doing sound.play() when needed, and handling death scenarios by adding a signal died at the beginning of the script and handling any type of death with the function below:

func _emit_player_died() -> void:
    # bit 2 corresponds to pipe (starts from 0)
    set_collision_mask_bit(2, false)
    dead = true
    SPEED = 0.0
    emit_signal("died")
    # play the sounds after, because yield will take a bit of time,
    # this way the camera stops when the player "dies"
    velocity.y = -DEATH_JUMP_VELOCITY
    velocity = move_and_slide(velocity)
    hit_sound.play()
    yield(hit_sound, "finished")
    dead_sound.play()
Finally, add the actual checks for when the player dies (like collision with the ground or a pipe), as well as a function that listens to a signal for when the player goes past the ceiling.

For the WorldDetector, the code is pretty simple: we just need a way of detecting if we ran out of ground and send a signal, as well as sending a signal when we start detecting ground/pipes behind us (to remove them), because the world is being generated as we move. The most basic function needed is one that compares the previous raycast collision state against the current one and emits a signal on changes:
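Presumably something along these lines, with the signature inferred from how it’s called below:

func _now_colliding(ray: RayCast2D, was_colliding: bool, signal_name: String) -> bool:
    # emit the signal only on the frame the raycast starts colliding
    var now_colliding: bool = ray.is_colliding()
    if now_colliding and not was_colliding:
        emit_signal(signal_name)
    return now_colliding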
We need to keep track of 3 “flags”: ground_was_colliding, ground_now_colliding and pipe_now_colliding (and their respective signals), which are used for the checks inside _physics_process; for example, for checking for new ground: ground_now_colliding = _now_colliding(old_ground, ground_now_colliding, "ground_started_colliding").

The WorldTiles script handles the GroundTileMap as well as the PipeTileMap, and basically functions as a “signal bus” connecting a bunch of signals from the WorldDetector with the TileMaps, while keeping track of how many pipes have been placed.

The GroundTileMap is the node that actually places the ground tiles upon receiving a signal. In general, what you want is to keep track of the newest tile that you need to place (the empty spot), as well as the last tile that is in the tilemap (technically the first one, if you count from left to right). I was experimenting with enums, so I used one to define the possible Ground tiles:
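Presumably along these lines (TILE_DOWN_1 is the filler “underground” tile used below):

enum Ground {
    TILE_1,
    TILE_2,
    TILE_3,
    TILE_DOWN_1,
}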
This way you can just select a tile by doing Ground.TILE_1, which corresponds to the int value 0. So most of the code is just:

# old_tile is the actual first tile, whereas the new_tile_position
# is the next empty tile; these also correspond to the top tile
const _ground_level: int = 7
const _initial_old_tile_x: int = -8
const _initial_new_tile_x: int = 11
var old_tile_position: Vector2 = Vector2(_initial_old_tile_x, _ground_level)
var new_tile_position: Vector2 = Vector2(_initial_new_tile_x, _ground_level)


func _place_new_ground() -> void:
    set_cellv(new_tile_position, _get_random_ground())
    set_cellv(new_tile_position + Vector2.DOWN, Ground.TILE_DOWN_1)
    new_tile_position += Vector2.RIGHT


func _remove_first_ground() -> void:
    set_cellv(old_tile_position, -1)
    set_cellv(old_tile_position + Vector2.DOWN, -1)
    old_tile_position += Vector2.RIGHT

You might notice that _initial_new_tile_x is 11 instead of 10: refer to the default ground tiles above, where we placed tiles from -8 to 10, so the next empty one is 11. These _place_new_ground and _remove_first_ground functions are called upon receiving the corresponding signal.
The PipeTileMap code is really similar to the GroundTileMap code, but instead of defining an enum for the ground tiles, we define it for the pipe patterns (because each pipe is composed of multiple pipe tiles).

The pipe system requires a bit more tracking, as we also need to instantiate a ScoreDetector here. I ended up keeping track of the placed pipes/detectors by using a “pipe stack” (and a “detector stack”), which is just an array of placed objects from which I pop the first one when deleting:

onready var _pipe_sep: int = get_parent().PIPE_SEP
const _pipe_size: int = 16
const _ground_level: int = 7
const _pipe_level_y: int = _ground_level - 1
const _initial_new_pipe_x: int = 11
var new_pipe_starting_position: Vector2 = Vector2(_initial_new_pipe_x, _pipe_level_y)
var pipe_stack: Array

# don't specify type for game, as it results in cyclic dependency,
# as stated here: https://godotengine.org/qa/39973/cyclic-dependency-error-between-actor-and-actor-controller
onready var game = get_parent().get_parent()
var detector_scene: PackedScene = preload("res://levels/detectors/score_detector/ScoreDetector.tscn")
var detector_offset: Vector2 = Vector2(16.0, -(_pipe_size / 2.0) * 16.0)
var detector_stack: Array

The detector_offset is just me being picky. For placing a new pipe, we get the starting position (the bottom pipe tile) and build upwards, then instantiate a new ScoreDetector (detector_scene) and set its position to the pipe starting position plus the offset, so it’s centered in the pipe; then we just need to connect the detector’s body_entered signal to the game, so we keep track of the scoring. Finally, we add the placed pipe and detector to their corresponding stacks:
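A sketch of what that placement looks like (the pipe pattern index and the signal handler name are placeholders, not the repo’s actual ones):

func _place_new_pipe() -> void:
    # build the pipe column upwards from the bottom tile
    var current_pipe: Vector2 = new_pipe_starting_position
    var c: int = 0
    while c < _pipe_size:
        set_cellv(current_pipe, 0)  # 0 is a placeholder pipe pattern/tile index
        current_pipe += Vector2.UP
        c += 1

    # center a ScoreDetector in the gap and let the game count the score
    var detector: Area2D = detector_scene.instance()
    detector.position = map_to_world(new_pipe_starting_position) + detector_offset
    detector.connect("body_entered", game, "_on_score")  # handler name is a placeholder
    add_child(detector)

    pipe_stack.append(new_pipe_starting_position)
    detector_stack.append(detector)
    new_pipe_starting_position.x += _pipe_sep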
For removing pipes it’s really similar, but instead of getting the position from the next tile, we pop the first element from the (pipe/detector) stack and work with that. To remove the cells we just set the index to -1:

func _remove_old_pipe() -> void:
    var current_pipe: Vector2 = pipe_stack.pop_front()
    var c: int = 0
    while c < _pipe_size:
        set_cellv(current_pipe, -1)
        current_pipe += Vector2.UP
        c += 1

    var detector: Area2D = detector_stack.pop_front()
    remove_child(detector)
    detector.queue_free()

These functions are called when receiving the signals to place/remove pipes.
Before proceeding, we need a way to save/load data (for the high scores). We’re going to use the ConfigFile node, which uses a custom version of the ini file format. We need to define where to save the data:
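Something like the following (the path and key names are assumptions, except SCORE_SECTION, which the next snippet uses):

const DATA_PATH: String = "user://data.cfg"
const SCORE_SECTION: String = "score"

var _data: ConfigFile = ConfigFile.new()


func _load_data() -> void:
    _data.load(DATA_PATH)


func save_data() -> void:
    _data.save(DATA_PATH)


func get_high_score() -> int:
    return _data.get_value(SCORE_SECTION, "high_score", 0)


func set_new_high_score(high_score: int) -> void:
    _data.set_value(SCORE_SECTION, "high_score", high_score)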
Then, whenever this script is loaded, we load the data, and if it’s a new file we add the default high score of 0:

func _ready() -> void:
    _load_data()

    if not _data.has_section(SCORE_SECTION):
        set_new_high_score(0)
        save_data()

Now, this script in particular will need to be a Singleton (AutoLoad), which means that there will be only one instance of it, available across all scripts. To do so, go to Project -> Project settings… -> AutoLoad, select this script in Path: and add a Node Name: (I used SavedData; if you use something else, be careful while following this devlog), which will be the name we use to access the singleton. Toggle on Enable if needed.
The Game script is also like a “signal bus”, in the sense that it connects all its children’s signals together; it also has the job of starting/stopping the _process and _physics_process methods of the children as needed. First, we need to define the signals and the references to all child nodes:

signal game_started
signal game_over
signal new_score(score, high_score)

onready var player: Player = $Player
onready var background: Sprite = $Background
onready var world_tiles: WorldTiles = $WorldTiles
onready var ceiling_detector: Area2D = $CeilingDetector
onready var world_detector: Node2D = $WorldDetector
onready var camera: Camera2D = $Camera
onready var start_sound: AudioStreamPlayer = $StartSound
onready var score_sound: AudioStreamPlayer = $ScoreSound

It’s important to get the actual “player speed”, as we’re using a scale to make the game look bigger (remember, pixel art). To do so, we need a reference to the game_scale we set up at the beginning, and then we compute the player_speed:

var _game_scale: float = ProjectSettings.get_setting("application/config/game_scale")
var player_speed: float


func _ready() -> void:
    scale = Vector2(_game_scale, _game_scale)
    # so we move at the actual speed of the player
    player_speed = player.SPEED / _game_scale
This player_speed will be needed as we need to move all the nodes (Background, Camera, etc.) in the x axis as the player is moving. This is done in the _physics_process:
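A sketch of it, moving each node by the same x amount (the exact node list depends on the scene; these match the references defined above):

-func _physics_process(delta: float) -> void:
-    var x_offset: float = player_speed * delta
-    background.move_local_x(x_offset)
-    camera.move_local_x(x_offset)
-    ceiling_detector.move_local_x(x_offset)
-    world_detector.move_local_x(x_offset)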
Where the player is a special case, as when the player dies, it should still move (only down), else it would just freeze in place. In _ready we connect all the necessary signals as well as initially set the processing to false using the last function. To start/restart the game we need to keep a flag called is_game_running initially set to false and then handle the (re)startability in _input:
-
func _input(event: InputEvent) -> void:
- if not is_game_running and event.is_action_pressed("jump"):
- _set_processing_to(true)
- is_game_running = true
- emit_signal("game_started")
- start_sound.play()
-
- if event.is_action_pressed("restart"):
- get_tree().reload_current_scene()
-
When the player dies, we set all processing to false, except for the player itself (so it can drop all the way to the ground). Also, when receiving a “scoring” signal, we manage the current score, as well as saving the new high score when applicable; note that we need to read the high_score at the beginning by calling SavedData.get_high_score(). The signal we emit will be received by the UI so it updates accordingly.
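As a sketch, the scoring handler could look like this (the handler name is an assumption; the SavedData calls and the new_score signal are the ones described above):

-var score: int = 0
-onready var high_score: int = SavedData.get_high_score()
-
-
-func _on_ScoreDetector_body_entered(_body: Node2D) -> void:
-    score += 1
-    if score > high_score:
-        high_score = score
-        SavedData.set_new_high_score(high_score)
-        SavedData.save_data()
-    emit_signal("new_score", score, high_score)
-    score_sound.play()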
First thing is to get a reference to all the child Labels, an initial reference to the high score as well as the version defined in the project settings:
-
onready var fps_label: Label = $MarginContainer/DebugContainer/FPS
-onready var version_label: Label = $MarginContainer/VersionContainer/Version
-onready var score_label: Label = $MarginContainer/InfoContainer/ScoreContainer/Score
-onready var high_score_label: Label = $MarginContainer/InfoContainer/ScoreContainer/HighScore
-onready var start_game_label: Label = $MarginContainer/InfoContainer/StartGame
-
-onready var _initial_high_score: int = SavedData.get_high_score()
-
-var _version: String = ProjectSettings.get_setting("application/config/version")
-
-
Then set the initial Label values as well as making the fps_label invisible:
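Which is just a matter of doing something like the following in _ready (a sketch; the exact initial strings are up to preference):

-func _ready() -> void:
-    fps_label.visible = false
-    version_label.text = "v" + _version
-    score_label.text = "0"
-    high_score_label.text = str(_initial_high_score)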
At this point the game should be fully playable (if any detail is missing, feel free to look into the source code linked at the beginning). The only thing missing is an icon for the game; I did one pretty quickly with the assets I had.
If you followed the directory structure I used, then the only thing needed is to transform the icon to the native Windows ico format (if exporting to Windows, else ignore this part). For this you need ImageMagick or some other program that can transform png (or whatever file format you used for the icon) to ico. I used Chocolatey (https://chocolatey.org/) to install imagemagick, then to convert the icon itself used: magick convert icon.png -define icon:auto-resize=256,128,64,48,32,16 icon.ico as detailed in Godot‘s Changing application icon for Windows.
You need to download the templates for exporting as detailed in Godot‘s Exporting projects. Basically you go to Editor -> Manage Export Templates… and download the latest one specific to your Godot version by clicking on Download and Install.
-
If exporting for Windows then you also need to download rcedit from here. Just place it wherever you want (I put it next to the Godot executable).
-
Then go to Project -> Export… and the window should be empty; add a new template by clicking on Add... at the top and then select the template you want. I used HTML5, Windows Desktop and Linux/X11. Really the only thing you need to set is the “Export Path” for each template, which is the location where the executable will be written, and in the case of the Windows Desktop template you could also set up stuff like Company Name, Product Name, File/Product Version, etc.
-
Once the templates are set up, select any and click on Export Project at the bottom, and make sure to untoggle Export With Debug in the window that pops up; this checkbox should be at the bottom of the new window.
Porting the FlappyBird clone to Godot 4.1 devlog 2
-
-
As stated in my FlappyBird devlog 1 entry, I originally started the clone in Godot 4, then backported it to Godot 3 because of HTML5 support, and now I’m porting it back again to Godot 4, as there is support again and I want to start getting familiar with it for future projects.
Disclaimer: I started the port back in Godot 4.0-something and left the project for a while, then opened the project again in Godot 4.1, and it didn’t ask to convert anything, so probably nowadays the conversion is better. Godot’s documentation is pretty useful; I looked at the GDScript reference and GDScript exports and that helped a lot.
Now that the game at least runs, next thing is to make it “playable”:
-
-
AnimatedSprite changed to AnimatedSprite2D (with the inclusion of AnimatedSprite3D). This node type changed with the automatic conversion.
-
Instead of checking if an animation is playing with the playing property, the method is_playing() needs to be used.
-
-
-
The default_gravity from the ProjectSettings no longer needs to be multiplied by 10 to have reasonable numbers. The default is now 980 instead of 98. I later changed this when refactoring the code and fine-tuning the feel of the movement.
-
The Collision mask can be changed programmatically with set_collision_mask_value (and similarly with the layer). Before, the mask/layer was specified by the bit, which started from 0, but now it is accessed by the layer_number, which starts from 1.
This is the most challenging part, as the TileMap system changed drastically; it is basically a from-the-ground-up redesign. Luckily, the TileMaps I use are really simple. Since this is not intuitive from the get-go, I took some notes on the steps I took to set up the world TileMap.
Instead of using one scene per TileMap, a single TileMap can be used with multiple Atlases in the TileSet. Multiple physics layers can now be used per TileSet, so you can separate the physics collisions on a per-Atlas or per-Tile basis. The inclusion of Tile patterns also helps when working with multiple Tiles for a single cell “placement”. How I did it:
-
-
Created one scene with one TileMap node, called WorldTileMap.tscn, with only one TileSet as multiple Atlas‘ can be used (this would be a single TileSet in Godot 3).
-
To add a TileSet, select the WorldTileMap and go to Inspector -> TileMap -> TileSet, then click on the (empty) dropdown and then the “New TileSet” button.
-
To manipulate a TileSet, it needs to be selected, either by clicking in the Inspector section or on the bottom of the screen (by default) to the left of TileMap, as shown in the image below.
-
-
-
-
-
-
Add two Atlas to the TileSet (one for the ground tiles and another for the pipes) by clicking on the “Add” button (as shown in the image above) and then on “Atlas”.
-
By selecting an atlas and having the “Setup” selected, change the Name to something recognizable like ground and add the texture atlas (the spritesheet) by dragging and dropping it into the Texture field, as shown in the image below. Take a note of the ID; they start from 0 and increment for each atlas, but if they’re not 0 and 1, change them.
-
-
-
-
I also like to delete unnecessary tiles (for now) by selecting the atlas “Setup” and the “Eraser” tool, as shown in the image below. Then to erase tiles just select them and they’ll be highlighted in black, once deleted they will be grayed out. If you want to activate tiles again just deselect the “Eraser” tool and select wanted tiles.
-
-
-
-
For the pipes it is a good idea to modify the “tile width” for horizontal 1x2 tiles. This can be accomplished by removing all tiles except for one, then going to the “Select” section of the atlas, selecting a tile and extending it either graphically by using the yellow circles or by using the properties, as shown in the image below.
-
-
-
-
Add physics (collisions) by selecting the WorldTileMap‘s TileSet and clicking on “Add Element” at TileMap -> TileSet -> Physics Layer twice, one physics layer per atlas. Then set the collision layers and masks accordingly (ground on layer 2, pipe on 3, in my case, based on my already set layers).
-
This will enable physics properties on the tiles when selecting them (by selecting the atlas, being in the correct “Select” section and selecting a tile), and a polygon can then be drawn with the tools provided. This part is hard to explain in text, but below is an image of how it looks once the polygon is set.
-
-
-
-
-
-    Notice that the polygon is drawn in *Physics Layer 0*. Setting the grid option to either `1` or `2` is useful when drawing the polygon; make sure the polygon closes itself or it won't be drawn.
-
-
-
Create a tile pattern by drawing the wanted tiles in the editor, then going to the Patterns tab (to the right of Tiles) in the TileMap, selecting all tiles wanted in the pattern and dragging them to the Patterns window. Added patterns will show in this window as shown in the image below, and are assigned IDs starting from 0.
Basically I merged all 3 scripts (ground_tile_map.gd, pipe_tile_map.gd and world_tiles.gd) into one (world_tile_map.gd) and was immediately able to delete a lot of signal calls between those 3 scripts, as well as redundant code.
-
The biggest change on the scripting side is the functions used to place tiles. In Godot 3 a single tile was placed with set_cellv(position, tile) (and erased by passing -1 as the tile); the Godot 4 equivalents are:
# place single tile in specific cell
-void set_cell(layer: int, coords: Vector2i, source_id: int = -1, atlas_coords: Vector2i = Vector2i(-1, -1), alternative_tile: int = 0)
-# erase tile at specific cell
-void erase_cell(layer: int, coords: Vector2i)
-
-
How to use these functions in Godot 4 (new properties or differences/changes):
-
-
layer: for my case I only use 1 layer so it is always set to 0.
-
coords: would be the equivalent to position for set_cellv in Godot 3.
-
source_id: which atlas to use (ground: 0 or pipe: 1).
-
atlas_coords: tile to use in the atlas. This would be the equivalent to tile in Godot 3.
-
alternative_tile: for tiles that have alternatives such as mirrored or rotated tiles, not required in my case.
-
-
Setting source_id=-1, atlas_coords=Vector2i(-1, -1) or alternative_tile=-1 will delete the tile at coords, similar to just using erase_cell.
-
With the addition of Tile patterns (to place multiple tiles), there is a new function:
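That function is set_pattern:

-void set_pattern(layer: int, position: Vector2i, pattern: TileMapPattern)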
Where position has the same meaning as coords in set_cell/erase_cell, not sure why it has a different name. The pattern can be obtained by using get_pattern method on the tile_set property of the TileMap. Something like:
-
var pattern: TileMapPattern = tile_set.get_pattern(index)
-
-
Other than that, Vector2 needs to be changed to Vector2i.
The audio in the Godot 3 version was added at the last minute and it was blasting by default, with no option to decrease the volume or mute it. To deal with this:
-
-
Refactored the code into a single scene/script to have better control.
Moved all the signal logic into an event bus to get rid of the coupling I had. This is accomplished by:
-
-
Creating a singleton (autoload) script which I called event.gd and can be accessed with Event.
-
All the signals are now defined in event.gd.
-
When a signal needs to be emitted, instead of emitting it from any particular script, emit it from the event bus with Event.<signal_name>.emit(<optional_args>).
-
When connecting to a signal, instead of taking a reference to where the signal is defined, simply connect it with Event.<signal_name>.connect(<callable>[.bind(<optional_args>)]).
-
For signals that already send arguments to the callable, they do not need to be specified in bind, only extras are needed here.
Really the only UI I had before was for rendering fonts, and the way fonts work changed a bit. Before, 3 resources were needed as noted in my previous entry:
-
-
Font file itself (.ttf for example).
-
DynamicFontData: used to point to a font file (.ttf) and then used as base resource.
-
DynamicFont: usable in godot control nodes which holds the DynamicFontData and configuration such as size.
-
-
Now only 1 resource is needed: a FontFile, which is the .ttf file itself or a Godot-created resource. There is also a FontVariation option, which takes a FontFile and looks like it’s used to create fallback options for fonts. The configuration (such as size) is no longer held in the font resource, but rather in the parent control node (like a Label). Double clicking on the .ttf file and disabling antialiasing and compression is something that might be needed. Optionally, create a LabelSettings resource which will hold the .ttf file and be used as a base for Labels (use “Make Unique” for different sizes). Another option is to use Themes and Variations.
-
I also created the respective volume button and slider UI for the added audio functionality, as well as a base Label to avoid repeating configuration on each Label node.
-
-
-
-
-
\ No newline at end of file
diff --git a/live/blog/g/flappybird_godot_devlog_3.html b/live/blog/g/flappybird_godot_devlog_3.html
deleted file mode 100644
index 953eee4..0000000
--- a/live/blog/g/flappybird_godot_devlog_3.html
+++ /dev/null
@@ -1,292 +0,0 @@
Final improvements to the FlappyBird clone and Android support devlog 3
-
-
Decided to conclude my FlappyBird journey with one last set of improvements, following up on devlogs 1 and 2. Focusing on refactoring, better UI, sprite selection and Android support.
-
I missed some features that I really wanted to get in but I’m already tired of working on this toy project and already eager to move to another one. Most of the features I wanted to add are just QoL UI enhancements and extra buttons basically.
The first part of my refactor was to move everything out of the src/ directory into the root directory of the git repository, organizing it a tiny bit better; personal preference from what I’ve learned so far. I also decided to place all the raw aseprite assets next to the imported ones, as this way it’s easier to make modifications and then save directly in the same directory. Also, a list of other refactoring done:
-
-
The way I handled the gameplay meant that I needed to make the camera, background and the (ceiling and tiles) “detectors” move along with the player, while restricting their movement in the x axis; really hacky. Instead, I did what I should’ve done from the beginning… just let the tiles move backwards and keep everything static, with the player only moving up and down (as I stated at the beginning of FlappyBird devlog 1 but didn’t actually follow).
-
Moved the set_process methodology out of main.gd and into each node’s own script, taking advantage of how signals work now. Each script now handles its own processing along these lines:
func _ready():
- Event.game_pause.connect(set_process)
- # and when the signal doesn't send anything:
- Event.game_start.connect(set_process.bind(true))
- Event.game_over.connect(set_process.bind(false))
-
First thing was to add a moving background functionality, by adding 2 of the same Sprite2D‘s one after another, and every time the first sprite moves out of the screen, positioning it right after the second sprite. Some sample code to accomplish this:
-
func _ready():
- # Sprite2D and CompressedTexture2D nodes
- background_orig.texture = background_texture
- texture_size = background_orig.texture.get_size()
-
- backgrounds.append(background_orig.duplicate())
- backgrounds.append(background_orig.duplicate())
- backgrounds[1].position = background_orig.position + Vector2(texture_size.x, 0.0)
-
- add_child(backgrounds[0])
- add_child(backgrounds[1])
- background_orig.visible = false
-
-# ifirst (index first) is a boolean value starting as false;
-# it's a hacky way of tracking the first sprite
-# (the one closest to the left of the screen) in the array
-func _process(delta: float):
- for background in backgrounds:
- background.move_local_x(- SPEED * delta)
-
- # moves the sprite that just exited the screen to the right of the upcoming sprite
- if backgrounds[int(ifirst)].position.x <= - background_orig.position.x:
- backgrounds[int(ifirst)].position.x = backgrounds[int(!ifirst)].position.x + texture_size.x
- ifirst = !ifirst
-
-
Then I added background parallax by separating the background sprites in two: background and “foreground” (the buildings in the original sprites). To move them separately, I just applied the same logic described above with 2 different speeds.
Also added a way to select between the bird sprites and the backgrounds, currently pretty primitive but functional. Accomplished this by holding textures in an exported array, then added a bit of logic to cycle between them (example for the background):
-
func _get_new_sprite_index(index: int) -> int:
- return clampi(index, 0, background_textures.size() - 1)
-
-
-func _set_sprites_index(index: int) -> int:
- var new_index: int = _get_new_sprite_index(index)
- if new_index == itexture:
- return new_index
- for bg in backgrounds:
- bg.texture = background_textures[new_index]
- for fg in foregrounds:
- fg.texture = foreground_textures[new_index]
- itexture = new_index
- return new_index
-
-
Then, in custom signals I just call _set_sprites_index with a texture_index +/- 1.
The attributes/config/saved data can be retrieved directly by the data_resource.gd variable name, for example: instead of _data.get_value(SCORE_SECTION, "high_score") it’s now simply _data.high_score. And similar for setting the values.
-
-
Compared to the 3.x version it is a lot simpler, though I still have setters and getters for each attribute/config (I’ll see how to change this in the future).
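As a sketch, such a data resource could look like this in Godot 4 (only high_score comes from the snippet above; the class name and a DATA_PATH like "user://data.tres" are assumptions):

-# data_resource.gd
-class_name DataResource
-extends Resource
-
-@export var high_score: int = 0

And then the saved-data handler can load/persist it with the built-in resource helpers:

-var _data: DataResource
-
-
-func _ready() -> void:
-    if FileAccess.file_exists(DATA_PATH):
-        _data = load(DATA_PATH)
-    else:
-        _data = DataResource.new()
-
-
-func save_data() -> void:
-    ResourceSaver.save(_data, DATA_PATH)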
I did add Android support, but it’s been so long since I did it that I actually don’t remember the specifics (this entry has been sitting in a draft for months). In general I followed the official guide for Exporting for Android, setting up Android Studio and remotely debugging with my personal phone; it does take a while to set up, but after that it’s as simple as doing “one click deploys”.
-
Most notably, I had to enable touch screen support and make the buttons clickable either by an actual mouse click or touch input. Some of the Project Settings that I remember needing changes are:
-
-
display/window/handheld/orientation set to Portrait.
-
input_devices/pointing/emulate_touch_from_mouse and input_devices/pointing/emulate_mouse_from_touch both set to on.
Found a bug on the ScoreDetector where it would collide with the Ceiling. While this is really not a problem outside of me doing tests, I fixed it by applying the correct layer/mask.
The first time I learned about Godot’s collision layers and masks (will refer to them just as layers) I thought I understood them only to find out that they’re a bit confusing when trying to figure out interactions between objects that are supposed to detect each other. On my last entry where I ported the FlappyBird clone to Godot 4.1 I stumbled upon an issue with the bird not colliding properly with the pipes and the ceiling detector not… well, detecting.
-
At the end of the day the issue wasn’t that the layers weren’t properly set up, but rather that the API to change the state of the collision layers changed between Godot 3 and Godot 4: when calling set_collision_layer_value (or .._mask), instead of specifying the bit which starts at 0, the layer_number is required, which happens to start at 1. This was a headache for like an hour and made me realise that I didn’t understand layers that well, or else I would’ve picked up on the error almost instantly.
-
While researching I found, in the same post, two really good short explanations that helped me grasp the concepts better; the first a bit technical (by Bojidar Marinov):
-
-
If enemy’s mask and object’s mask are set to 0 (i.e. no layers), they will still collide with the player, because the player’s mask still includes their respective layers.
-
Overall, if the objects are A and B, the check for collision is A.mask & B.layers || B.mask & A.layers, where & is bitwise-and, and || is the or operator. I.e. it takes the layers that correspond to the other object’s mask, and checks if any of them is on in both places. It will then proceed to check it the other way around, and if any of the two tests passes, it would report the collision.
-
-
And the second, shorter and less technical but still powerful (in the same post linking back to Godot 3.0: Using KinematicBody2D):
-
-
collision_layer describes the layers that the object appears in. By default, all bodies are on layer 1.
-
collision_mask describes what layers the body will scan for collisions. If an object isn’t in one of the mask layers, the body will ignore it. By default, all bodies scan layer 1.
-
-
While the complete answer is the first, as that is how layers work, the second can be used like a rule: 1) the layer is where the object lives, while 2) the mask is what the object will detect.
One of my first issues when starting a project is how to structure everything, so I had to spend some time researching best practices and going with what I liked the most; after trying some of them, I wanted to write down what I’m sticking with.
-
The first place to look is, of course, the official Godot documentation on Project organization; along with the project structure discussion, it also comes with best practices for code style and what-not. I don’t like this project/directory structure that much, just because it tells you to bundle everything under the same directory, but it’s a really good starting point; for example it tells you to use:
-
-
/models/town/house/
-
house.dae
-
window.png
-
door.png
-
-
-
-
Where I would prefer to have more modularity, for example:
It might look like it’s more work, but I prefer it like this. I wish this site was still available, as I got most of my ideas from there and it was a pretty good resource, but apparently the owner is not maintaining his site anymore. There is this excellent comment on reddit which shows a project/directory structure more in line with what I’m currently using (and similar to the site that is down that I liked). I ended up with:
-
-
/.git
-
/assets (raw assets/editable assets/asset packs)
-
/releases (executables ready to publish)
-
/src (the actual godot project)
-
.godot/
-
actors/ (or entities)
-
player/
-
sprites/
-
player.x
-
…
-
-
-
enemy/ (this could be a dir with subdirectories for each type of enemy for example…)
-
sprites/
-
enemy.x
-
…
-
-
-
actor.x
-
…
-
-
-
levels/ (or scenes)
-
common/
-
sprites/
-
…
-
-
-
main/
-
…
-
-
-
overworld/
-
…
-
-
-
dugeon/
-
…
-
-
-
Game.tscn (I’m considering the “Game” as a level/scene)
-
game.gd
-
-
-
objects/
-
box/
-
…
-
-
-
…
-
-
-
screens/
-
main_menu/
-
…
-
-
-
…
-
-
-
globals/ (singletons/autoloads)
-
ui/
-
menus/
-
…
-
-
-
…
-
-
-
sfx/
-
…
-
-
-
vfx/
-
…
-
-
-
etc/
-
…
-
-
-
Main.tscn (the entry point of the game)
-
main.gd
-
icon.png (could also be on a separate “icons” directory)
-
project.godot
-
…
-
-
-
<any other repository related files>
-
-
And so on, I hope the idea is clear. I’ll probably change my mind on the long run, but for now this has been working fine.
-
-
-
-
-
\ No newline at end of file
diff --git a/live/blog/g/gogodot_jam3_devlog_1.html b/live/blog/g/gogodot_jam3_devlog_1.html
deleted file mode 100644
index f73bb27..0000000
--- a/live/blog/g/gogodot_jam3_devlog_1.html
+++ /dev/null
@@ -1,783 +0,0 @@
Creating my Go Godot Jam 3 entry using Godot 3.5 devlog 1
-
-
The jam’s theme is Evolution and all the details are listed here. This time I’m logging as I go, so there might be some changes to the script or scenes along the way (I couldn’t actually do this, as I was running out of time). Note that I’m not going to go into much detail; the obvious will be omitted.
-
I wanted to do a Snake clone, and I’m using this jam as an excuse to do it and add something to it. The features include:
-
-
Snakes will pass their stats in some form to the next snakes.
-
Non-grid snake movement. I just hate the grid constraint, so I wanted to make it move in any direction.
-
Depending on the food you eat, you’ll gain new mutations/abilities, and the more you eat the more that mutation develops (didn’t have time to add this feature, sad).
-
Procedural map creation.
-
-
I created this game using Godot 3.5-rc3. You can find the source code in my GitHub here, which at the time of writing doesn’t contain any exported files; for those you can go ahead and play it in your browser at itch.io, which you can find below:
Again, similar to the FlappyBird clone I created, I’m using the directory structure I wrote about on Godot project structure, with slight modifications to test things out. Also using similar Project settings to those from the FlappyBird clone, like the pixel art texture imports, keybindings, layers, etc.
-
I’ve also setup GifMaker, with slight modifications as the AssetLib doesn’t install it correctly and contains unnecessary stuff: moved necessary files to the res://addons directory, deleted test scenes and files in general, and copied the license to the res://docs directory. Setting this up was a bit annoying because the tutorial is bad (with all due respect). I might do a separate entry just to explain how to set it up, because I couldn’t find it anywhere other than by inspecting some of the code/scenes. I ended up leaving this disabled in the game as it hurt the performance a lot, but it’s an option I’ll end up researching more.
-
This time I’m also going to be using an Event bus singleton (which I’m going to just call Event) as managing signals was pretty annoying on my last project; as well as a Global singleton for essential stuff so I don’t have to do as many cross references between nodes/scenes.
This is the most challenging part in my opinion, as making all the body parts follow the head along a user-defined path is kinda hard. I tried like 4-5 options, and the one I’m detailing here is the only one that worked as I wanted. This time the directory structure I’m using is the following:
The most basic thing is to move the head; this is what we have control of. Create a scene called Head.tscn and setup the basic KinematicBody2D with its own Sprite and CollisionShape2D (I used a small circle for the tip of the head), and set the Collision Layer/Mask accordingly, for now just layer = bit 1. All we need to do is keep moving the snake forwards and be able to rotate left or right. Created a new script called head.gd attached to the root (KinematicBody2D) and added:
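A sketch of the movement code (the input action names and the rotation constant are assumptions; direction, velocity and the Global.SNAKE_SPEED usage match what the state machine scripts use later on):

-extends KinematicBody2D
-
-var direction: Vector2 = Vector2.RIGHT
-var velocity: Vector2 = Vector2.ZERO
-
-
-func _physics_process(delta: float) -> void:
-    rotate_on_input(delta)
-    velocity = direction * Global.SNAKE_SPEED
-    velocity = move_and_slide(velocity)
-
-
-func rotate_on_input(delta: float) -> void:
-    if Input.is_action_pressed("rotate_left"):
-        rotation -= Global.SNAKE_ROTATION_SPEED * delta
-    if Input.is_action_pressed("rotate_right"):
-        rotation += Global.SNAKE_ROTATION_SPEED * delta
-    direction = Vector2.RIGHT.rotated(rotation)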
To move other snake parts by following the snake head the only solution I found was to use the Path2D and PathFollow2D nodes. Path2D basically just handles the curve/path that PathFollow2D will use to move its child node; and I say “child node” in singular… as PathFollow2D can only handle one damn child, all the other ones will have weird transformations and/or rotations. So, the next thing to do is to setup a way to compute (and draw so we can validate) the snake’s path/curve.
-
Added the signal snake_path_new_point(coordinates) to the Event singleton and then added the following to head.gd:
-
var _time_elapsed: float = 0.0
-
-# using a timer is not recommended for < 0.01
-func _handle_time_elapsed(delta: float) -> void:
- if _time_elapsed >= Global.SNAKE_POSITION_UPDATE_INTERVAL:
- Event.emit_signal("snake_path_new_point", global_position)
- _time_elapsed = 0.0
- _time_elapsed += delta
-
-
This will be pinging the current snake head position every 0.01 seconds (defined in Global). Now create a new scene called Snake.tscn which will contain a Node2D, a Path2D and an instance of Head as its children. Create a new script called snake.gd attached to the root (Node2D) with the following content:
-
class_name Snake
-extends Node2D
-
-onready var path: Path2D = $Path
-
-func _ready():
- Event.connect("snake_path_new_point", self, "_on_Head_snake_path_new_point")
-
-
-func _draw() -> void:
- if path.curve.get_baked_points().size() >= 2:
- draw_polyline(path.curve.get_baked_points(), Color.aquamarine, 1, true)
-
-
-func _on_Head_snake_path_new_point(coordinates: Vector2) -> void:
- path.curve.add_point(coordinates)
- # update call is to draw curve as there are new points to the path's curve
- update()
-
-
With this, we’re now populating the Path2D curve points with the position of the snake head. You should be able to see it because of the _draw call. If you run it you should see something like this:
At this point the only thing to do is to add the corresponding next body parts and tail of the snake. To do so, we need a PathFollow2D to use the live-generated Path2D; the only caveat is that we need one of these per body part/tail (this took me hours to figure out, thanks documentation).
-
Create a new scene called Body.tscn with a PathFollow2D as its root and an Area2D as its child, then just add the necessary Sprite and CollisionShape2D for the Area2D; I’m using layer = bit 2 for its collision. Create a new script called generic_segment.gd with the following code:
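The script basically just moves the segment along the parent path at the snake’s speed, plus an exported type to tell body and tail apart (a sketch; the offset-based movement matches how segment offsets are handled further below):

-extends PathFollow2D
-
-export(String, "body", "tail") var type: String = "body"
-
-
-func _physics_process(delta: float) -> void:
-    # advance along the snake's path at the same speed as the head
-    offset += Global.SNAKE_SPEED * delta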
And this can be attached to the Body‘s root node (PathFollow2D), no extra setup needed. Repeat the same steps for creating the Tail.tscn scene and when attaching the generic_segment.gd script just configure the Type parameter to tail in the GUI (by selecting the node with the script attached and editing in the Inspector).
Now it’s just a matter of handling when to add new body parts in the snake.gd script. For now I’ve only set up adding body parts to fulfill the initial length of the snake (this doesn’t include the head or tail). The extra code needed is the following:
-
export(PackedScene) var BODY_SEGMENT_NP: PackedScene
-export(PackedScene) var TAIL_SEGMENT_NP: PackedScene
-
-var current_body_segments: int = 0
-var max_body_segments: int = 1
-
-
-func _add_initial_segment(type: PackedScene) -> void:
- if path.curve.get_baked_length() >= (current_body_segments + 1.0) * Global.SNAKE_SEGMENT_SIZE:
- var _temp_body_segment: PathFollow2D = type.instance()
- path.add_child(_temp_body_segment)
- current_body_segments += 1
-
-
-func _on_Head_snake_path_new_point(coordinates: Vector2) -> void:
- path.curve.add_point(coordinates)
- # update call is to draw curve as there are new points to the path's curve
- update()
-
- # add the following lines
- if current_body_segments < max_body_segments:
- _add_initial_segment(BODY_SEGMENT_NP)
- elif current_body_segments == max_body_segments:
- _add_initial_segment(TAIL_SEGMENT_NP)
-
-
Select the Snake node and add the Body and Tail scene to the parameters, respectively. Then when running you should see something like this:
-
-
Now, we need to handle adding body parts after the snake is complete and has already moved for a bit; this will require a queue, so we can add part by part in the case that we eat multiple pieces of food in a short period of time. For this we need to add some signals: snake_adding_new_segment(type), snake_added_new_segment(type) and snake_added_initial_segments, and use them where it makes sense. Now we need to add the following:
-
var body_segment_stack: Array
-var tail_segment: PathFollow2D
-# didn't know how to name this; basically holds the current path length
-# whenever we add a body segment, and we use this queue to add body parts
-var body_segment_queue: Array
-
-
As well as updating _add_initial_segment with the following, so it adds the new segment to the specific variable:
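Meaning something like this (a sketch), appending to body_segment_stack or setting tail_segment depending on the scene being instanced:

-func _add_initial_segment(type: PackedScene) -> void:
-    if path.curve.get_baked_length() >= (current_body_segments + 1.0) * Global.SNAKE_SEGMENT_SIZE:
-        var _temp_segment: PathFollow2D = type.instance()
-        path.add_child(_temp_segment)
-        if type == BODY_SEGMENT_NP:
-            body_segment_stack.append(_temp_segment)
-        else:
-            tail_segment = _temp_segment
-        current_body_segments += 1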
Now it’s just a matter of adding to the segment queue whenever a new segment is needed, as well as adding each segment in a loop whenever we have items in the queue and there is enough distance to place the segment. These two things can be achieved with the following code:
-
# this will be called in _physics_process
-func _add_new_segment() -> void:
- var _path_length_threshold: float = body_segment_queue[0] + Global.SNAKE_SEGMENT_SIZE
- if path.curve.get_baked_length() >= _path_length_threshold:
- var _removed_from_queue: float = body_segment_queue.pop_front()
- var _temp_body_segment: PathFollow2D = BODY_SEGMENT_NP.instance()
- var _new_body_offset: float = body_segment_stack.back().offset - Global.SNAKE_SEGMENT_SIZE
-
- _temp_body_segment.offset = _new_body_offset
- body_segment_stack.append(_temp_body_segment)
- path.add_child(_temp_body_segment)
- tail_segment.offset = body_segment_stack.back().offset - Global.SNAKE_SEGMENT_SIZE
-
- current_body_segments += 1
-
-
-func _add_segment_to_queue() -> void:
- # need to have the queues in a fixed separation, else if the eating functionality
- # gets spammed, all next bodyparts will be spawned almost at the same spot
- if body_segment_queue.size() == 0:
- body_segment_queue.append(path.curve.get_baked_length())
- else:
- body_segment_queue.append(body_segment_queue.back() + Global.SNAKE_SEGMENT_SIZE)
-
-
With everything implemented and connected accordingly, we can add segments on demand (for testing I’m adding them with a key press); it should look like this:
-
-
For now, this should be enough; I’ll add more stuff as needed as I go. Last thing is that after I finished testing that the movement felt ok, I just added a way to stop the snake whenever it collides with itself, by using the following code (and the signal snake_segment_body_entered(body)) in a main.gd script that is the entry point for the game:
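A sketch of that script (how the snake is actually stopped is an assumption; propagate_call is one way to toggle processing on the whole subtree):

-extends Node2D
-
-onready var snake: Snake = $Snake
-
-
-func _ready() -> void:
-    Event.connect("snake_segment_body_entered", self, "_on_snake_segment_body_entered")
-
-
-func _on_snake_segment_body_entered(_body: Node) -> void:
-    # stop the snake's processing when the head hits its own body
-    snake.propagate_call("set_physics_process", [false])
-    snake.propagate_call("set_process", [false])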
After a while of testing and developing, I noticed that sometimes the head “detaches” from the body when a lot of rotations happen (moving the snake left or right), because of how imprecise the Curve2D is. To fix this I just send a signal (snake_rotated) whenever the snake rotates and make a small correction (in generic_segment.gd):
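The correction itself is tiny; something along these lines, where the amount to nudge is an assumed Global constant:

-func _ready() -> void:
-    Event.connect("snake_rotated", self, "_on_snake_rotated")
-
-
-func _on_snake_rotated() -> void:
-    # small nudge forwards so the segment doesn't lag behind the
-    # imprecise Curve2D when the head keeps rotating
-    offset += Global.SNAKE_ROTATION_CORRECTION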
For now I just decided to set up a simple system to see that everything works fine. The idea is to make some kind of generic food node/scene and a “food manager” to spawn them, for now in totally random locations. For this I added the following signals: food_placing_new_food(type), food_placed_new_food(type) and food_eaten(type).
-
First thing is creating Food.tscn, which is just an Area2D with its necessary children and an attached script called food.gd. The script is really simple:
-
class_name Food # needed to access Type enum outside of the script, this registers this script as a node
-extends Area2D
-
-enum Type {
- APPLE
-}
-
-var _type_texture: Dictionary = {
- Type.APPLE: preload("res://entities/food/sprites/apple.png")
-}
-
-export(Type) var TYPE
-onready var _sprite: Sprite = $Sprite
-
-
-func _ready():
- connect("body_entered", self, "_on_body_entered")
- _sprite.texture = _type_texture[TYPE]
-
-
-func _on_body_entered(body: Node) -> void:
- Event.emit_signal("food_eaten", TYPE)
- queue_free()
-
-
Then this food_eaten signal is received in snake.gd to add a new segment to the queue.
-
Finally, for the food manager I just created a FoodManager.tscn with a Node2D root and an attached script called food_manager.gd. To get a random position:
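At this point it’s just a random point in an assumed playable area (Global.MAP_SIZE is an assumption; the world-aware check comes later):

-func _get_random_pos() -> Vector2:
-    var x: float = rand_range(0.0, Global.MAP_SIZE.x)
-    var y: float = rand_range(0.0, Global.MAP_SIZE.y)
-    return Vector2(x, y)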
Which gets the job done, but later I’ll have to add a way to check that the position is valid. And to actually place the food:
-
func _place_new_food() -> void:
- var food: Area2D = FOOD.instance()
- var position: Vector2 = _get_random_pos()
- food.global_position = position
- add_child(food)
-
-
And this is used in _process to place new food whenever needed. For now I added a condition to add food until 10 pieces are in place, and keep adding whenever the food count drops below 10. After setting everything up, this is the result:
It just so happened that I saw a video on creating random maps using a method called random walks; this video was made by NAD LABS: Nuclear Throne Like Map Generation In Godot. It’s a pretty simple but powerful script; he provided the source code, on which I based my random walker, just tweaked a few things and added others. Some of the maps that can be generated with this method (I already added some random sprites):
-
-
-
-
It started with just black and white tiles, but I ended up adding some sprites as it was really harsh on the eyes. My implementation is basically the same as NAD LABS‘ with a few changes, most importantly: I separated the generation into 2 different tilemaps (floor and wall) to have better control, and wrapped everything in a single scene with a “main” script with the following important functions:
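A sketch of those functions (the floor_tile_map node name and Global.TILE_SIZE are assumptions; what they do matches the description below):

-func get_valid_map_coords() -> Array:
-    # used cells minus the safe cells around the origin, used to place food
-    var cells: Array = floor_tile_map.get_used_cells()
-    for cell in get_cells_around():
-        cells.erase(cell)
-    return cells
-
-
-func get_centered_world_position(location: Vector2) -> Vector2:
-    # half a tile offset so the food sits in the middle of the cell
-    return floor_tile_map.map_to_world(location) + Vector2.ONE * Global.TILE_SIZE / 2.0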
Where get_cells_around is just a function that gets the safe cells around the origin. And this get_valid_map_coords just returns used cells minus the safe cells, to place food. get_centered_world_position is so we can center the food in the tiles.
-
Some signals I used for the world gen: world_gen_walker_started(id), world_gen_walker_finished(id), world_gen_walker_died(id) and world_gen_spawn_walker_unit(location).
The last food algorithm doesn’t check anything related to the world, and thus the food could spawn in the walls and outside the map.
-
First thing is that I generalized the food into a single script and added basic food and special food, which inherit from base food. The most important stuff for the base food is to be able to set all the necessary properties at first:
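A sketch of the base food script (the exact property set and setup signature are assumptions, following the description below):

-extends Area2D
-
-# filled in by the inheriting scripts (basic/special food)
-var _type_texture: Dictionary = {}
-
-var type: int
-var points: int
-var location: Vector2  # in tilemap coordinates
-
-
-func setup(food_type: int, food_points: int, world_position: Vector2, map_location: Vector2) -> void:
-    type = food_type
-    points = food_points
-    location = map_location
-    global_position = world_position
-
-
-func update_texture() -> void:
-    # separate from setup: the food is created and configured first, added
-    # as a child, and only then does the Sprite node exist to update
-    $Sprite.texture = _type_texture[type]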
Where the update_texture needs to be a separate function, because we need to create the food first, set properties, add as a child and then update the sprite; we also need to keep track of the global position, location (in tilemap coordinates) and identifiers for the type of food.
-
Then basic/special food just extend base food, define a Type enum and preload the necessary textures, for example:
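For example, the basic food could look like this (a sketch; the base_food.gd path is an assumption, the apple texture comes from the earlier snippet):

-class_name BasicFood
-extends "res://entities/food/base_food.gd"
-
-enum Type {
-    APPLE,
-}
-
-
-func _init() -> void:
-    _type_texture = {
-        Type.APPLE: preload("res://entities/food/sprites/apple.png"),
-    }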
Now, one of the most important changes to food_manager.gd is getting an actual random valid position:
-
func _get_random_pos() -> Array:
- var found_valid_loc: bool = false
- var index: int
- var location: Vector2
-
- while not found_valid_loc:
- index = randi() % possible_food_locations.size()
- location = possible_food_locations[index]
- if current_basic_food.find(location) == -1 and current_special_food.find(location) == -1:
- found_valid_loc = true
-
- return [world_generator.get_centered_world_position(location), location]
-
-
Other than that, there are some differences between placing normal and special food (especially the signal they send, and whether an extra “special points” property is set). Some of the signals I used that might be important: food_placing_new_food(type), food_placed_new_food(type, location) and food_eaten(type, location).
I got the idea of saving the current stats (points, max body segments, etc.) in a separate Stats class for easier load/save of data. The option I went with didn’t work as I would’ve liked, as it was a pain in the ass to set up, and each time a new property is added you have to manually update the load/save helper functions… so not the best option. I used json, but saving a Node directly could work better, or using resources (saving tres files).
The load/save function is pretty standard. It’s a singleton/autoload called SavedData with a script that extends from Node called save_data.gd:
-
const DATA_PATH: String = "user://data.save"
-
-var _stats: Stats
-
-
-func _ready() -> void:
- _load_data()
-
-
-# called when setting "stats" and thus saving
-func save_data(stats: Stats) -> void:
- _stats = stats
- var file: File = File.new()
- file.open(DATA_PATH, File.WRITE)
- file.store_line(to_json(_stats.get_stats()))
- file.close()
-
-
-func get_stats() -> Stats:
- return _stats
-
-
-func _load_data() -> void:
- # create an empty file if not present to avoid error while loading settings
- _handle_new_file()
-
- var file = File.new()
- file.open(DATA_PATH, File.READ)
- _stats = Stats.new()
- _stats.set_stats(parse_json(file.get_line()))
- file.close()
-
-
-func _handle_new_file() -> void:
- var file: File = File.new()
- if not file.file_exists(DATA_PATH):
- file.open(DATA_PATH, File.WRITE)
- _stats = Stats.new()
- file.store_line(to_json(_stats.get_stats()))
- file.close()
-
-
It uses json as the file format, but I might end up changing this in the future to something else more reliable and easier to use (Stats class related issues).
For this I created a scoring mechanism and just called it ScoreManager (score_manager.gd), which basically listens to the food_eaten signal and adds points accordingly to the currently loaded Stats object. The main function is:
-
func _on_food_eaten(properties: Dictionary) -> void:
- var is_special: bool = properties["special"]
- var type: int = properties["type"]
- var points: int = properties["points"]
- var special_points: int = properties["special_points"]
- var location: Vector2 = properties["global_position"]
- var amount_to_grow: int
- var special_amount_to_grow: int
-
- amount_to_grow = _process_points(points)
- _spawn_added_score_text(points, location)
- _spawn_added_segment_text(amount_to_grow)
-
- if is_special:
- special_amount_to_grow = _process_special_points(special_points, type)
- # _spawn_added_score_text(points, location)
- _spawn_added_special_segment_text(special_amount_to_grow, type)
- _check_if_unlocked(type)
-
-
Where the most important function is:
-
func _process_points(points: int) -> int:
- var score_to_grow: int = (stats.segments + 1) * Global.POINTS_TO_GROW - stats.points
- var amount_to_grow: int = 0
- var growth_progress: int
- stats.points += points
- if points >= score_to_grow:
- amount_to_grow += 1
- points -= score_to_grow
- # maybe be careful with this
- amount_to_grow += points / Global.POINTS_TO_GROW
- stats.segments += amount_to_grow
- Event.emit_signal("snake_add_new_segment", amount_to_grow)
-
- growth_progress = Global.POINTS_TO_GROW - ((stats.segments + 1) * Global.POINTS_TO_GROW - stats.points)
- Event.emit_signal("snake_growth_progress", growth_progress)
- return amount_to_grow
-
-
Which will add the necessary points to Stats.points and return the amount of new snake segments to grow. After this, _spawn_added_score_text and _spawn_added_segment_text just spawn a Label with the info on the points/segments gained; this is custom UI I created, nothing fancy.
-
Last thing is that in _on_food_eaten there is a check at the end, where if the food eaten is “special” then a custom variation of the last 3 functions is executed. These are really similar, just specific to each kind of food.
-
This ScoreManager also handles the game_over signal, to calculate progress, set the necessary Stats values and save the data:
-
func _on_game_over() -> void:
- var max_stats: Stats = _get_max_stats()
- SaveData.save_data(max_stats)
- Event.emit_signal("display_stats", initial_stats, stats, mutation_stats)
-
-
-func _get_max_stats() -> Stats:
- var old_stats_dict: Dictionary = initial_stats.get_stats()
- var new_stats_dict: Dictionary = stats.get_stats()
- var max_stats: Stats = Stats.new()
- var max_stats_dict: Dictionary = max_stats.get_stats()
- var bool_stats: Array = [
- "trait_dash",
- "trait_slow",
- "trait_jump"
- ]
-
- for i in old_stats_dict:
- if bool_stats.has(i):
- max_stats_dict[i] = old_stats_dict[i] or new_stats_dict[i]
- else:
- max_stats_dict[i] = max(old_stats_dict[i], new_stats_dict[i])
- max_stats.set_stats(max_stats_dict)
- return max_stats
-
-
Then this sends a display_stats signal to activate UI elements that show the progression.
-
Naturally, the saved Stats are loaded whenever needed. For example, for the Snake, we load the stats and setup any value needed from there (like a flag to know if any ability is enabled), and since we’re saving the new Stats at the end, then on restart we load the updated one.
I redesigned the snake code (the head, actually) to use the state machine pattern by following this guide, which is definitely a great guide: straight to the point and easy to implement.
-
Other than what is shown in the guide, I implemented some important functions in the state_machine.gd script itself, to be used by each of the states as needed:
func _physics_process(delta: float) -> void:
- # state specific code, move_and_slide is called here
- if state.has_method("physics_process"):
- state.physics_process(delta)
-
- handle_slow_speeds()
- player.handle_time_elapsed(delta)
-
-
And now it’s just a matter of implementing the necessary states. I used 4: normal_state.gd, slow_state.gd, dash_state.gd and jump_state.gd.
-
The normal_state.gd contains what the original head.gd code contained:
-
func physics_process(delta: float) -> void:
- fsm.rotate_on_input()
- fsm.player.velocity = fsm.player.direction * Global.SNAKE_SPEED
- fsm.player.velocity = fsm.player.move_and_slide(fsm.player.velocity)
-
- fsm.slow_down_on_collisions(Global.SNAKE_SPEED_BACKUP)
-
-
-func input(event: InputEvent) -> void:
- if fsm.player.can_dash and event.is_action_pressed("dash"):
- exit("DashState")
- if fsm.player.can_slow and event.is_action_pressed("slow"):
- exit("SlowState")
- if fsm.player.can_jump and event.is_action_pressed("jump"):
- exit("JumpState")
-
-
Here, the exit method is basically to change to the next state. And lastly, I’m only gonna show the dash_state.gd as the other ones are pretty similar:
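A sketch of it (the State base class name and the dash timer wiring come from the guide’s pattern and are assumptions; the Global constants match the description below):

-extends State
-
-
-func enter() -> void:
-    # swap the global speed for the dash speed while dashing
-    Global.SNAKE_SPEED = Global.SNAKE_DASH_SPEED
-    fsm.player.dash_timer.start()
-
-
-func physics_process(delta: float) -> void:
-    fsm.rotate_on_input()
-    fsm.player.velocity = fsm.player.direction * Global.SNAKE_SPEED
-    fsm.player.velocity = fsm.player.move_and_slide(fsm.player.velocity)
-
-
-func _on_DashTimer_timeout() -> void:
-    # restore the normal speed before going back to the normal state
-    Global.SNAKE_SPEED = Global.SNAKE_SPEED_BACKUP
-    exit("NormalState")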
Where the important parts happen in the enter and exit functions. We need to change Global.SNAKE_SPEED to Global.SNAKE_DASH_SPEED on start, and start the timer for how long the dash should last. On exit we reset Global.SNAKE_SPEED back to normal. There is probably a better way of updating Global.SNAKE_SPEED, but this works just fine.
-
For the other ones it’s the same. The only difference with jump_state.gd is that the collision from head to body is disabled, and no rotation is allowed (by not calling the rotate_on_input function).
I actually didn’t finish this game (as I had visualized it), but I got it to a semi-playable state, which is good. My big lesson from this jam is the time management required to plan and design a game. I lost a lot of time trying to implement some mechanics because I was facing many issues due to my lack of practice (which was expected), as well as trying to blog and create the necessary sprites myself. Next time I should just get an asset pack and do something with it, as well as keeping the scope of my game smaller.
I’ve been wanting to get into gamedev for a while now, but it’s always a pain to stay consistent. I just recently started to get into it again, and this time I’m trying to actually do stuff.
-
So, the plan is to blog about my progress and clone some simple games just to get started. I’m thinking of sticking with Godot just because I like that it’s open source, it’s getting better and better over time (big rewrite happening right now) and I already like how the engine works. In particular, I’ll start using Godot 4 even though it’s not done yet, to get used to the new features; specifically pumped for GDScript 2.0. Actually… (for the small clones/ripoffs) I’ll need to use Godot 3.x (probably 3.5), as Godot 4 doesn’t have support for exporting to WebAssembly (HTML5) yet, and I want that to publish to itch.io and my website. I’ll continue to use Godot 4 for bigger projects, as they will take longer, and I hope that by the time I need to publish there are no issues with exporting.
-
For a moment I almost started a new subdomain just for gamedev stuff, but decided to just use a different directory for subtleness; this directory and use of tags should be enough. I’ll be posting the entry about the first rip-off I’m developing (FlappyBird L O L) shortly.
-
Update: Godot 4 already released and it now has HTML5 support, so back to the original plan.
-
-
-
-
-
-
\ No newline at end of file
diff --git a/live/blog/rss.xml b/live/blog/rss.xml
deleted file mode 100644
index 775006f..0000000
--- a/live/blog/rss.xml
+++ /dev/null
@@ -1,5070 +0,0 @@
-
-
-
- Luévano's Blog
- https://blog.luevano.xyz
-
- My personal blog where I post about my thoughts, some how-to's, or general ranting.
- en-us
-
- Copyright 2023 David Luévano Alvarado
- david@luevano.xyz (David Luévano Alvarado)
- david@luevano.xyz (David Luévano Alvarado)
-
-
- pyssg v0.9.0
- https://validator.w3.org/feed/docs/rss2.html
- 30
-
- https://static.luevano.xyz/images/b/default.png
- Luévano's Blog
- https://blog.luevano.xyz
-
-
- Final improvements to the FlappyBird clone and Android support devlog 3
- https://blog.luevano.xyz/g/flappybird_godot_devlog_3.html
- https://blog.luevano.xyz/g/flappybird_godot_devlog_3.html
- Fri, 01 Mar 2024 10:00:57 GMT
- English
- Gamedev
- Gdscript
- Godot
- Notes on the final improvements to my FlappyBird clone made in Godot 4.x. Also details on the support for Android.
-
-
- Godot layers and masks notes
- https://blog.luevano.xyz/g/godot_layers_and_masks_notes.html
- https://blog.luevano.xyz/g/godot_layers_and_masks_notes.html
- Tue, 29 Aug 2023 10:10:06 GMT
- English
- Gamedev
- Gdscript
- Godot
- Some notes I took regarding Godot's confusing collision layers and masks.
-
-
- Porting the FlappyBird clone to Godot 4.1 devlog 2
- https://blog.luevano.xyz/g/flappybird_godot_devlog_2.html
- https://blog.luevano.xyz/g/flappybird_godot_devlog_2.html
- Sun, 27 Aug 2023 23:28:10 GMT
- English
- Gamedev
- Gdscript
- Godot
- Notes on porting my FlappyBird clone to Godot 4.1, as well as notes on the improvements and changes made overall.
- As stated in my FlappyBird devlog 1 entry I originally started the clone in Godot 4, then backported back to Godot 3 because of HTML5 support, and now I’m porting it back again to Godot 4 as there is support again and I want to start getting familiar with it for future projects.
-
Disclaimer: I started the port back in Godot 4.0-something and left the project for a while, then opened the project again in Godot 4.1 and it didn’t ask to convert anything, so the conversion is probably better nowadays. Godot’s documentation is pretty useful: I looked at the GDScript reference and GDScript exports and that helped a lot.
Now that the game at least runs, next thing is to make it “playable”:
-
-
AnimatedSprite changed to AnimatedSprite2D (with the inclusion of AnimatedSprite3D). This node type changed with the automatic conversion.
-
Instead of checking if an animation is playing with the playing property, the method is_playing() needs to be used.
-
-
-
The default_gravity from the ProjectSettings no longer needs to be multiplied by 10 to have reasonable numbers. The default is now 980 instead of 98. I later changed this when refactoring the code and fine-tuning the feel of the movement.
-
The Collision mask can be changed programmatically with the set_collision_mask_value (and similar with the layer). Before, the mask/layer was specified by the bit which started from 0, but now it is accessed by the layer_number which starts from 1.
This is the most challenging part as the TileMap system changed drastically: it is basically a ground-up redesign. Luckily the TileMaps I use are really simple. Since this is not intuitive from the get-go, I took some notes on the steps I took to set up the world TileMap.
Instead of using one scene per TileMap, a single TileMap can now be used with multiple Atlases in the TileSet. Multiple physics layers can also be used per TileSet, so you can separate the physics collisions on a per-Atlas or per-Tile basis. The inclusion of Tile patterns also helps when working with multiple Tiles for a single cell “placement”. How I did it:
-
-
Created one scene with one TileMap node, called WorldTileMap.tscn, with only one TileSet, as multiple Atlases can be used (each Atlas would have been its own TileSet in Godot 3).
-
To add a TileSet, select the WorldTileMap and go to Inspector -> TileMap -> TileSet then click on “” and then “New TileSet” button.
-
To manipulate a TileSet, it needs to be selected, either by clicking in the Inspector section or on the bottom of the screen (by default) to the left of TileMap, as shown in the image below.
-
-
-
-
-
-
Add two Atlas to the TileSet (one for the ground tiles and another for the pipes) by clicking on the “Add” button (as shown in the image above) and then on “Atlas”.
-
By selecting an atlas and having the “Setup” selected, change the Name to something recognizable like ground and add the texture atlas (the spritesheet) by dragging and dropping in the “” Texture field, as shown in the image below. Take a note of the ID; they start from 0 and increment for each atlas, but if they’re not 0 and 1, change them.
-
-
-
-
I also like to delete unnecessary tiles (for now) by selecting the atlas “Setup” and the “Eraser” tool, as shown in the image below. To erase tiles just select them and they’ll be highlighted in black; once deleted they will be grayed out. If you want to activate tiles again, just deselect the “Eraser” tool and select the wanted tiles.
-
-
-
-
For the pipes it is a good idea to modify the “tile width” for horizontal 1x2 tiles. This can be accomplished by removing all tiles except for one, then going to the “Select” section of the atlas, selecting a tile and extending it either graphically by using the yellow circles or by using the properties, as shown in the image below.
-
-
-
-
Add physics (collisions) by selecting the WorldTileMap‘s TileSet and clicking on “Add Element” at TileMap -> TileSet -> Physics Layer twice, one physics layer per atlas. Then set the collision layers and masks accordingly (in my case, based on my already set layers: ground on layer 2, pipe on 3).
-
This will enable physics properties on the tiles when selecting them (by selecting the atlas, being in the correct “Select” section and selecting a tile), and you can start drawing a polygon with the tools provided. This part is hard to explain in text, but below is an image of how it looks once the polygon is set.
-
-
-
-
-
- Notice that the polygon is drawn in *Physics Layer 0*. Setting the grid option to either `1` or `2` is useful when drawing the polygon; make sure the polygon closes itself or it won't be drawn.
-
-
-
Create a tile pattern by drawing the tiles wanted in the editor and then going to the Patterns tab (to the right of Tiles) in the TileMap, selecting all tiles wanted in the pattern and dragging the tiles to the Patterns window. Added patterns will show in this window as shown in the image below, and are assigned IDs starting from 0.
Basically merged all 3 scripts (ground_tile_map.gd, pipe_tile_map.gd and world_tiles.gd) into one (world_tile_map.gd) and was immediately able to delete a lot of signal calls between those 3 scripts and redundant code.
-
The biggest change on the scripting side is the functions used to place tiles, which in Godot 4 are now:
# place single tile in specific cell
-void set_cell(layer: int, coords: Vector2i, source_id: int = -1, atlas_coords: Vector2i = Vector2i(-1, -1), alternative_tile: int = 0)
-# erase tile at specific cell
-void erase_cell(layer: int, coords: Vector2i)
-
-
How to use these functions in Godot 4 (new properties or differences/changes):
-
-
layer: for my case I only use 1 layer so it is always set to 0.
-
coords: would be the equivalent to position for set_cellv in Godot 3.
-
source_id: which atlas to use (ground: 0 or pipe: 1).
-
atlas_coords: tile to use in the atlas. This would be the equivalent to tile in Godot 3.
-
alternative_tile: for tiles that have alternatives such as mirrored or rotated tiles, not required in my case.
-
-
Setting source_id=-1, atlas_coords=Vector2i(-1, -1) or alternative_tile=-1 will delete the tile at coords, similar to just using erase_cell.
-
With the addition of Tile patterns (to place multiple tiles), there is a new function:
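# new in Godot 4, signature as documented for TileMap:
void set_pattern(layer: int, position: Vector2i, pattern: TileMapPattern)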
Where position has the same meaning as coords in set_cell/erase_cell, not sure why it has a different name. The pattern can be obtained by using get_pattern method on the tile_set property of the TileMap. Something like:
-
var pattern: TileMapPattern = tile_set.get_pattern(index)
-
-
Other than that, Vector2 needs to be changed to Vector2i.
The audio in the Godot 3 version was added at the last minute and it was blasting by default, with no option to decrease the volume or mute it. To deal with this:
-
-
Refactored the code into a single scene/script to have better control.
Moved all the signal logic into an event bus to get rid of the coupling I had. This is accomplished by:
-
-
Creating a singleton (autoload) script which I called event.gd and can be accessed with Event.
-
All the signals are now defined in event.gd.
-
When a signal needs to be emitted, instead of emitting the signal from any particular script, emit it from the event bus with Event.<signal_name>.emit(<optional_args>).
-
When connecting to a signal, instead of taking a reference to where the signal is defined, simply connect it with Event.<signal_name>.connect(<callable>[.bind(<optional_args>)]).
-
For signals that already send arguments to the callable, they do not need to be specified in bind, only extras are needed here.
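A minimal sketch of this pattern (script and signal names illustrative, not necessarily the ones from the game):

# event.gd, registered as an autoload (singleton) named "Event"
extends Node

# all game-wide signals live here
signal game_over
signal score_changed(new_score: int)

Any script can then emit with Event.score_changed.emit(score) and listen with Event.score_changed.connect(_on_score_changed), without the two scripts ever referencing each other.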
Really the only UI I had before was for rendering fonts, and the way fonts work changed a bit. Before, 3 resources were needed as noted in my previous entry:
-
-
Font file itself (.ttf for example).
-
DynamicFontData: used to point to a font file (.ttf) and then used as base resource.
-
DynamicFont: usable in godot control nodes which holds the DynamicFontData and configuration such as size.
-
-
Now only 1 resource is needed: FontFile, which is the .ttf file itself or a godot-created resource. There is also a FontVariation option, which takes a FontFile and looks like it’s used to create fallback options for fonts. The configuration (such as size) is no longer held in the font resource, but rather in the parent control node (like a Label). Double clicking on the .ttf file and disabling antialiasing and compression is something that might be needed. Optionally create a LabelSettings resource which will hold the .ttf file and be used as base for Labels; use “Make Unique” for different sizes. Another option is to use Themes and Variations.
-
I also created the respective volume button and slider UI for the added audio functionality as well as creating a base Label to avoid repeating configuration on each Label node.
Updated @export to @export_range. The auto conversion didn’t use the correct annotation and instead used a comment; see the sketch after this list.
-
Refactored the game_scale methodology as it was inconsistent. Now only one size is used as base and everything else is just scaled with the rootWindow.
-
Got rid of the FPS monitoring, was only using it for debugging purposes back then.
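For reference, this is roughly what the annotation change looks like (variable name and range purely illustrative):

# Godot 3 export with a range hint:
export(int, 0, 200) var pipe_speed: int = 100
# roughly what the auto conversion produced (hint demoted to a comment):
@export var pipe_speed: int = 100 # (int, 0, 200)
# corrected by hand:
@export_range(0, 200) var pipe_speed: int = 100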
-
]]>
-
-
- Set up a pastebin alternative with PrivateBin and YOURLS
- https://blog.luevano.xyz/a/pastebin_alt_with_privatebin.html
- https://blog.luevano.xyz/a/pastebin_alt_with_privatebin.html
- Sun, 20 Aug 2023 09:46:33 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to set up a pastebin alternative with PrivateBin and YOURLS as shortener, on Arch.
- I learned about PrivateBin a few weeks back and ever since I’ve been looking into installing it, along with a URL shortener (a service I wanted to self host since forever). It took me a while as I ran into some problems while experimenting and documenting all the necessary bits in here.
-
My setup is exposed to the public, and as always is heavily based on previous entries as described in Prerequisites. Descriptions on setting up MariaDB (preferred MySQL replacement for Arch) and PHP are written in this entry as this is the first time I’ve needed them.
-
Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.
To use mariadb simply run the command and it will try to log in with the corresponding linux user running it. The general login command is:
-
mariadb -u <username> -p <database_name>
-
-
The database_name is optional. It will prompt for a password.
-
Using mariadb as root, create users with their respective database if needed with the following queries:
-
MariaDB> CREATE USER '<username>'@'localhost' IDENTIFIED BY '<password>';
-MariaDB> CREATE DATABASE <database_name>;
-MariaDB> GRANT ALL PRIVILEGES ON <database_name>.* TO '<username>'@'localhost';
-MariaDB> quit
-
-
The database_name will depend on how YOURLS and PrivateBin are configured, that is if the services use a separate database and/or table prefixes are used.
PHP is a general-purpose scripting language usually used for web development, which was supposedly ass for a long time, but that seems to be a misconception from the old days.
The default configuration file is self explanatory, it is located at /etc/webapps/yourls/config.php. Make sure to correctly set the user/database YOURLS will use and either create a cookie or get one from URL provided.
-
It is important to change the $yourls_user_passwords variable; YOURLS will hash the passwords on login so they are not stored in plaintext. Password hashing can be disabled with:
-
define( 'YOURLS_NO_HASH_PASSWORD', true );
-
-
I also changed the “shortening method” to 62 to include more characters:
-
define( 'YOURLS_URL_CONVERT', 62 );
-
-
The $yourls_reserved_URL variable will need more blacklisted words depending on the use-case. Make sure the YOURLS_PRIVATE variable is set to true (default) if the service will be exposed to the public.
The admin area is located at https://short.example.com/admin/, login with any of the configured users set with the $yourls_user_passwords in the config. Activate plugins by going to the “Manage Plugins” page (located at the top left) and clicking on the respective “Activate” button by hovering the “Action” column, as shown below:
-
-
I personally activated the “Random ShortURLs” and “Allow Hyphens in Short URLs”. Once the “Random ShortURLs” plugin is activated it can be configured by going to the “Random ShortURLs Settings” page (located at the top left, right below “Manage Plugins”), only config available is “Random Keyword Length”.
-
The main admin area can be used to manually shorten any link provided, by using the automatic shortening or by providing a custom short URL.
-
Finally, the “Tools” page (located at the top left) contains the signature token, used for YOURLS: Passwordless API, as well as useful bookmarklets for URL shortening while browsing.
The most important changes needed are basepath according to the privatebin URL and the [model] and [model_options] to use MySQL instead of plain filesystem files:
-
[model]
-; example of DB configuration for MySQL
-class = Database
-[model_options]
-dsn = "mysql:host=localhost;dbname=privatebin;charset=UTF8"
-tbl = "privatebin_" ; table prefix
-usr = "privatebin"
-pwd = "<password>"
-opt[12] = true ; PDO::ATTR_PERSISTENT
-
-
Any other [model] or [model_options] needs to be commented out (for example, the default filesystem setting).
I recommend creating a separate user for privatebin in yourls by modifying the $yourls_user_passwords variable in the yourls config file. Then login with this user and get the signature from the “Tools” section in the admin page; for more: YOURLS: Passwordless API.
-
For a “private” yourls installation (that needs username/password), set urlshortener:
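; in PrivateBin's cfg/conf.php, [main] section (a sketch: the signature token is
; the one from yourls' "Tools" page, URL format assumed from the yourls API)
urlshortener = "https://short.example.com/yourls-api.php?signature=<token>&action=shorturl&format=json&url="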
Restart the nginx service for changes to take effect:
-
systemctl restart nginx.service
-
]]>
-
-
- Set up a media server with Jellyfin, Sonarr and Radarr
- https://blog.luevano.xyz/a/jellyfin_server_with_sonarr_radarr.html
- https://blog.luevano.xyz/a/jellyfin_server_with_sonarr_radarr.html
- Mon, 24 Jul 2023 04:30:14 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to set up a media server with Jellyfin, Sonarr and Radarr, on Arch. With Bazarr, too.
- Second part of my self hosted media server. This is a direct continuation of Set up qBitTorrent with Jackett for use with Starr apps, which will be mentioned as “first part” going forward. Sonarr, Radarr, Bazarr (Starr apps) and Jellyfin setups will be described in this part. Same introduction applies to this entry, regarding the use of documentation and configuration.
-
Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.
-
Kindly note that I do not condone the use of BitTorrent for illegal activities. I take no responsibility for what you do when setting up anything shown here. It is for you to check your local laws before using automated downloaders such as Sonarr and Radarr.
Radarr is a movie collection manager that can be used to download movies via torrents. This is actually a fork of Sonarr, so they’re pretty similar, I just wanted to set up movies first.
-
Install from the AUR with yay:
-
yay -S radarr
-
-
Add the radarr user to the servarr group:
-
gpasswd -a radarr servarr
-
-
The default port that Radarr uses is 7878 for http (the one you need for the reverse proxy).
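Start/enable the radarr.service:

systemctl enable --now radarr.service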
This will start the service and create the default configs under /var/lib/radarr. You need to change the URLBase as the reverse proxy is under a subdirectory (/radarr). Edit /var/lib/radarr/config.xml:
-
...
-<UrlBase>/radarr</UrlBase>
-...
-
-
Then restart the radarr service:
-
systemctl restart radarr.service
-
-
Now https://isos.yourdomain.com/radarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials.
This is personal preference and it dictates your preferred file sizes. You can follow TRaSH: Quality settings to maximize the quality of the downloaded content and restrict low quality stuff.
-
Personally, I think TRaSH’s quality settings are a bit elitist and first world-y. I’m fine with whatever and the tracker I’m using has the quality I want anyways. I did, however, set it to a minimum of 0 and maximum of 400 for the qualities shown in TRaSH’s guide. Configuring anything below 720p shouldn’t be necessary anyways.
Again, this is also completely a personal preference selection and depends on the quality and filters you want. My custom format selections are mostly based on TRaSH: HD Bluray + WEB quality profile.
-
The only Unwanted format that I’m not going to use is the Low Quality (LQ) one, as it blocks one of the sources I’m using to download a bunch of movies. The reasoning behind the LQ custom format is that these release groups don’t care much about quality (they keep low file sizes) and name tagging, which I understand, but I’m fine with this as I can upgrade movies individually whenever I want (I want a big catalog of content that I can quickly watch).
As mentioned in Custom Formats and Quality this is completely a personal preference. I’m going to go for “Low Quality” downloads by still following some of the conventions from TRaSH. I’m using the TRaSH: HD Bluray + WEB quality profile with the exclusion of the LQ profile.
-
I set the name to “HD Bluray + WEB”. I’m also not upgrading the torrents for now. Language set to “Original”.
Pretty straightforward: just click on the giant “+” button and click on the qBitTorrent option. Then configure:
-
-
Name: can be anything, just an identifier.
-
Enable: enable it.
-
Host: use 127.0.0.1. For some reason I can’t make it work with the reverse proxied qBitTorrent.
-
Port: the port number you chose, 30000 in my case.
-
Url Base: leave blank as even though you have it exposed under /qbt, the service itself is not.
-
Username: the Web UI username, admin by default.
-
Password: the Web UI password, adminadmin by default (you should’ve changed it if you have the service exposed).
-
Category: movies.
-
-
Everything else can be left as default, but maybe change Completed Download Handling if you’d like. Same goes for the general Failed Download Handling download clients’ option.
Also easy to set up: just click on the giant “+” button and click on the custom Torznab option (you can also use the preset -> Jackett Torznab option). Then configure:
-
-
Name: can be anything, just an identifier. I like to do “Jackett - INDEXER”, where “INDEXER” is just an identifier.
-
URL: http://127.0.0.1:9117/jack/api/v2.0/indexers/YOURINDEXER/results/torznab/, where YOURINDEXER is specific to each indexer (yts, nyaasi, etc.). Can be directly copied from the indexer’s “Copy Torznab Feed” button on the Jackett Web UI.
-
API Path: /api, leave as is.
-
API Key: this can be found at the top right corner in Jackett’s Web UI.
-
Categories: which categories to use when searching; these are generic categories until you test/add the indexer. After you add the indexer you can come back and select your preferred categories (like just toggling the movies categories).
-
Tags: I like to add a tag for the indexer name like yts or nyaa. This is useful to control which indexers to use when adding new movies.
-
-
Everything else on default. Download Client can also be set, which can be useful to keep different categories per indexer or something similar. Seed Ratio and Seed Time can also be set and are used to manage when to stop the torrent, this can also be set globally on the qBitTorrent Web UI, this is a personal setting.
You can now start to download content by going to Movies -> Add New. Basically just follow the Radarr: How to add a movie guide. The screenshots from the guide are a bit outdated but it contains everything you need to know.
-
I personally use:
-
-
Monitor: Movie Only.
-
Minimum Availability: Released.
-
Quality Profile: “HD Bluray + WEB”, the one configured in this entry.
-
Tags: the indexer name I want to use to download the movie, usually just yts for me (remember this is a “LQ” release group, so if you have that custom format disable it) as mentioned in Indexers. If you don’t specify a tag it will only use indexers that don’t have a tag set.
-
Start search for missing movie: toggled on. Immediately start searching for the movie and start the download.
-
-
Once you click on “Add Movie” it will add it to the Movies section and start searching and selecting the best torrent it finds, according to the “filters” (quality settings, profile and indexer(s)).
-
When it selects a torrent it sends it to qBitTorrent and you can even go ahead and monitor it over there. Else you can also monitor at Activity -> Queue.
-
After the movie is downloaded and processed by Radarr, it will create the appropriate hardlinks to the media/movies directory, as set in First part: Directory structure.
Sonarr is a TV series collection manager that can be used to download series via torrents. Most of the install process, configuration and whatnot is going to be basically the same as with Radarr.
-
Install from the AUR with yay:
-
yay -S sonarr
-
-
Add the sonarr user to the servarr group:
-
gpasswd -a sonarr servarr
-
-
The default port that Sonarr uses is 8989 for http (the one you need for the reverse proxy).
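Start/enable the sonarr.service:

systemctl enable --now sonarr.service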
This will start the service and create the default configs under /var/lib/sonarr. You need to change the URLBase as the reverse proxy is under a subdirectory (/sonarr). Edit /var/lib/sonarr/config.xml:
-
...
-<UrlBase>/sonarr</UrlBase>
-...
-
-
Then restart the sonarr service:
-
systemctl restart sonarr.service
-
-
Now https://isos.yourdomain.com/sonarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials.
Similar to Radarr: Quality this is personal preference and it dictates your preferred file sizes. You can follow TRaSH: Quality settings to maximize the quality of the downloaded content and restrict low quality stuff.
-
Will basically do the same as in Radarr: Quality: set minimum of 0 and maximum of 400 for everything 720p and above.
This is a bit different than with Radarr, the way it is configured is by setting “Release profiles”. I took the profiles from TRaSH: WEB-DL Release profile regex. The only possible change I’ll do is disable the Low Quality Groups and/or the “Golden rule” filter (for x265 encoded video).
-
For me it ended up looking like this:
-
-
But yours can differ as it’s mostly personal preference. For the “Quality profile” I’ll be using the default “HD-1080p” most of the time, but I also created a “HD + WEB (720/1080)” which works better for some shows.
Exactly the same as with Radarr: Download clients, the only change is the category from movies to tv (or whatever you want); click on the giant “+” button and click on the qBitTorrent option. Then configure:
-
-
Name: can be anything, just an identifier.
-
Enable: enable it.
-
Host: use 127.0.0.1.
-
Port: the port number you chose, 30000 in my case.
-
Url Base: leave blank as even though you have it exposed under /qbt, the service itself is not.
-
Username: the Web UI username, admin by default.
-
Password: the Web UI password, adminadmin by default (you should’ve changed it if you have the service exposed).
-
Category: tv.
-
-
Everything else can be left as default, but maybe change Completed Download Handling if you’d like. Same goes for the general Failed Download Handling download clients’ option.
Also exactly the same as with Radarr: Indexers, click on the giant “+” button and click on the custom Torznab option (this doesn’t have the Jackett preset). Then configure:
-
-
Name: can be anything, just an identifier. I like to do “Jackett - INDEXER”, where “INDEXER” is just an identifier.
-
URL: http://127.0.0.1:9117/jack/api/v2.0/indexers/YOURINDEXER/results/torznab/, where YOURINDEXER is specific to each indexer (eztv, nyaasi, etc.). Can be directly copied from the indexer’s “Copy Torznab Feed” button on the Jackett Web UI.
-
API Path: /api, leave as is.
-
API Key: this can be found at the top right corner in Jackett’s Web UI.
-
Categories: which categories to use when searching; these are generic categories until you test/add the indexer. After you add the indexer you can come back and select your preferred categories (like just toggling the TV categories).
-
Tags: I like to add a tag for the indexer name like eztv or nyaa. This is useful to control which indexers to use when adding new series.
-
-
Everything else on default. Download Client can also be set, which can be useful to keep different categories per indexer or something similar. Seed Ratio and Seed Time can also be set and are used to manage when to stop the torrent, this can also be set globally on the qBitTorrent Web UI, this is a personal setting.
Almost the same as with Radarr: Download content, but I’ve been personally selecting the torrents I want to download for each season/episode so far, as the indexers I’m using are all over the place and I like consistencies. Will update if I find a (near) 100% automation process, but I’m fine with this anyways as I always monitor that everything is going fine.
-
Add by going to Series -> Add New. Basically just follow the Sonarr: Library add new guide. Adding series needs a few more options than movies in Radarr, but it’s straightforward.
-
I personally use:
-
-
Monitor: All Episodes.
-
Quality Profile: “HD + WEB (720/1080)”. This depends on what I want for that show; lately I’ve been experimenting with this one.
-
Series Type: Standard. For now I’m just downloading shows, but it has an Anime option.
-
Tags: the “indexer_name” I want to use to download the series; I’ve been using all indexers so I just use all tags, as I’m experimenting and trying multiple options.
-
Season Folder: enabled. I like as much organization as possible.
-
Start search for missing episodes: disabled. Depends on you, due to my indexers, I prefer to check manually the season packs, for example.
-
Start search for cutoff unmet episodes: disabled. Honestly don’t really know what this is.
-
-
Once you click on “Add X” it will add it to the Series section and will start as monitored. So far I haven’t noticed that it immediately starts downloading (because of the “Start search for missing episodes” setting) but I always click on unmonitor the series, so I can manually check (again, due to the low quality of my indexers).
-
When it automatically starts to download an episode/season it will send it to qBitTorrent and you can monitor it over there. Else you can also monitor at Activity -> Queue. Same thing goes if you download manually each episode/season via the interactive search.
-
To interactively search episodes/seasons go to Series and then click on any series, then click either on the interactive search button for the episode or the season, it is an icon of a person as shown below:
-
-
Then it will bring up a window with the search results, where it shows the indexer it got the result from, the size of the torrent, peers, language, quality, the score it received from the configured release profiles, an alert in case the torrent is “bad”, and the download button to manually download the torrent you want. An example shown below:
-
-
After the episode/season is downloaded and processed by Sonarr, it will create the appropriate hardlinks to the media/tv directory, as set in Directory structure.
Jellyfin is a media server “manager”, usually used to manage and organize video content (movies, TV series, etc.) which could be compared with Plex or Emby for example (take them as possible alternatives).
-
Install from the AUR with yay:
-
yay -S jellyfin-bin
-
-
I’m installing the pre-built binary instead of building it as I was getting a lot of errors and the server was even crashing. You can try installing jellyfin instead.
-
Add the jellyfin user to the servarr group:
-
gpasswd -a jellyfin servarr
-
-
You can already start/enable the jellyfin.service which will start at http://127.0.0.1:8096/ by default where you need to complete the initial set up. But let’s create the reverse proxy first then start everything and finish the set up.
I’m going to have my jellyfin instance under a subdomain with an nginx reverse proxy as shown in the Arch wiki. For that, create a jellyfin.conf at the usual sites-<available/enabled> path for nginx:
-
server {
- listen 80;
- server_name jellyfin.yourdomain.com; # change accordingly to your wanted subdomain and domain name
- set $jellyfin 127.0.0.1; # jellyfin is running at localhost (127.0.0.1)
-
- # Security / XSS Mitigation Headers
- add_header X-Frame-Options "SAMEORIGIN";
- add_header X-XSS-Protection "1; mode=block";
- add_header X-Content-Type-Options "nosniff";
-
- # Content Security Policy
- # See: https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
- # Enforces https content and restricts JS/CSS to origin
- # External Javascript (such as cast_sender.js for Chromecast) must be whitelisted.
- add_header Content-Security-Policy "default-src https: data: blob: http://image.tmdb.org; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' https://www.gstatic.com/cv/js/sender/v1/cast_sender.js https://www.youtube.com blob:; worker-src 'self' blob:; connect-src 'self'; object-src 'none'; frame-ancestors 'self'";
-
- location = / {
- return 302 https://$host/web/;
- }
-
- location / {
- # Proxy main Jellyfin traffic
- proxy_pass http://$jellyfin:8096;
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header X-Forwarded-Protocol $scheme;
- proxy_set_header X-Forwarded-Host $http_host;
-
- # Disable buffering when the nginx proxy gets very resource heavy upon streaming
- proxy_buffering off;
- }
-
- # location block for /web - This is purely for aesthetics so /web/#!/ works instead of having to go to /web/index.html/#!/
- location = /web/ {
- # Proxy main Jellyfin traffic
- proxy_pass http://$jellyfin:8096/web/index.html;
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header X-Forwarded-Protocol $scheme;
- proxy_set_header X-Forwarded-Host $http_host;
- }
-
- location /socket {
- # Proxy Jellyfin Websockets traffic
- proxy_pass http://$jellyfin:8096;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection "upgrade";
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header X-Forwarded-Protocol $scheme;
- proxy_set_header X-Forwarded-Host $http_host;
- }
-}
-
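Extend or create the SSL certificate; a sketch assuming certbot with the nginx plugin, as in previous entries:

certbot --nginx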
Similarly to the isos subdomain, that will autodetect the new subdomain and extend the existing certificate(s). Restart the nginx service for changes to take effect:
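systemctl restart nginx.service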
Then navigate to https://jellyfin.yourdomain.com and either continue with the set up wizard if you didn’t already or continue with the next steps to configure your libraries.
-
The initial setup wizard makes you create an user (will be the admin for now) and at least one library, though these can be done later. For more check Jellyfin: Quick start.
-
Remember to use the configured directory as mentioned in Directory structure. Any other configuration (like adding users or libraries) can be done at the dashboard: click on the 3 horizontal lines on the top left of the Web UI then navigate to Administration -> Dashboard. I didn’t configure much other than adding a couple of users for me and friends, I wouldn’t recommend using the admin account to watch (personal preference).
-
Once there is at least one library it will show at Home along with the latest movies (if any) similar to the following (don’t judge, these are just the latest I added due to friend’s requests):
-
-
And inside the “Movies” library you can see the whole catalog where you can filter or just scroll as well as seeing Suggestions (I think this starts getting populated after a while) and Genres:
You can also install/activate plugins to get extra features. Most of the plugins you might want to use are already available in the official repositories and can be found in the “Catalog”. There are a lot of plugins that are focused around anime and TV metadata, as well as an Open Subtitles plugin to automatically download missing subtitles (though this is managed with Bazarr).
-
To activate plugins click on the 3 horizontal lines on the top left of the Web UI then navigate to Administration -> Dashboard -> Advanced -> Plugins and click on the Catalog tab (top of the Web UI). Here you can select the plugins you want to install. By default only the official ones are shown, for more you can add more repositories.
-
The only plugin I’m using is the “Playback Reporting”, to get a summary of what is being used in the instance. But I might experiment with some anime-focused plugins when the time comes.
Although not recommended, and having explicitly set Sonarr to not download any x265/HEVC content (by using the Golden rule), there might be cases where the only option you have is to download such content. If that is the case and you happen to have a way to do Hardware Acceleration, for example by having an NVIDIA graphics card (in my case), then you should enable it to avoid using lots of resources on your system.
-
Using hardware acceleration will leverage your GPU to do the transcoding and save resources on your CPU. I tried streaming x265 content and it basically used 70-80% on all cores of my CPU, while on the other hand using my GPU it used the normal amount on the CPU (70-80% on a single core).
-
This will be the steps to install on an NVIDIA graphics card, specifically a GTX 1660 Ti. But more info and guides can be found at Jellyfin: Hardware Acceleration for other acceleration methods.
Ensure you have the NVIDIA drivers and utils installed. If you’ve done this in the past then you can skip this part, else you might be using the default nouveau drivers. Follow the next steps to set up the NVIDIA drivers, which are basically a summary of NVIDIA: Installation for my setup, so double check the wiki in case you have an older NVIDIA graphics card.
-
Install the nvidia and nvidia-utils packages:
-
pacman -S nvidia nvidia-utils
-
-
Modify /etc/mkinitcpio.conf to remove kms from the HOOKS array. It should look like this (commented line is how it was for me before the change):
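A sketch assuming the stock HOOKS array (your mkinitcpio.conf may differ):

# HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block filesystems fsck)
HOOKS=(base udev autodetect modconf keyboard keymap consolefont block filesystems fsck)

Regenerate the initramfs with mkinitcpio -P and reboot for the driver change to take effect. Then install jellyfin-ffmpeg from the AUR:

yay -S jellyfin-ffmpeg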
This provides the jellyfin-ffmpeg executable, which is necessary for Jellyfin to do hardware acceleration; it needs to be this specific one.
-
Then in the Jellyfin go to the transcoding settings by clicking on the 3 horizontal lines on the top left of the Web UI and navigating to Administration -> Dashboard -> Playback -> Transcoding and:
-
-
Change the Hardware acceleration from “None” to “Nvidia NVENC”.
-
Some other options will pop up, just make sure you enable “HEVC” (which is x265) in the list of Enable hardware encoding for:.
-
Scroll down and specify the ffmpeg path, which is /usr/lib/jellyfin-ffmpeg/ffmpeg.
-
-
Don’t forget to click “Save” at the bottom of the Web UI, it will ask if you want to enable hardware acceleration.
Bazarr is a companion for Sonarr and Radarr that manages and downloads subtitles.
-
Install from the AUR with yay:
-
yay -S bazarr
-
-
Add the bazarr user to the servarr group:
-
gpasswd -a bazarr servarr
-
-
The default port that Bazarr uses is 6767 for http (the one you need for the reverse proxy), and it has pre-configured the default ports for Radarr and Sonarr.
Add the following setting in the server block of the isos.conf:
-
server {
- # server_name and other directives
- ...
-
- # Increase http2 max sizes
- large_client_header_buffers 4 16k;
-
- # some other blocks like location blocks
- ...
-}
-
-
Then add the following location blocks in the isos.conf, where I’ll keep it as /bazarr/:
-
location /bazarr/ {
- proxy_pass http://127.0.0.1:6767/bazarr/; # change port if needed
- proxy_http_version 1.1;
-
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header Host $http_host;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection "Upgrade";
-
- proxy_redirect off;
-}
-# Allow the Bazarr API through if you enable Auth on the block above
-location /bazarr/api {
- auth_request off;
- proxy_pass http://127.0.0.1:6767/bazarr/api;
-}
-
-
This is taken from Bazarr: Reverse proxy help. Restart the nginx service for the changes to take effect:
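systemctl restart nginx.service

Then start/enable the bazarr.service:

systemctl enable --now bazarr.service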
This will start the service and create the default configs under /var/lib/bazarr. You need to change the base_url for the necessary services as they’re running under a reverse proxy and under subdirectories. Edit /var/lib/bazarr/config/config.ini:
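A sketch of the relevant bits, assuming the subdirectories used in this entry:

[general]
base_url = /bazarr

[sonarr]
base_url = /sonarr

[radarr]
base_url = /radarr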
Now https://isos.yourdomain.com/bazarr is accessible. Secure the instance right away by adding authentication under Settings -> General -> Security. I added the “Forms” option, just fill in the username and password then click on save changes on the top left of the page. You can restart the service again and check that it asks for login credentials. I also disabled Settings -> General -> Updates -> Automatic.
This doesn’t require much thinking and it’s up to personal preference, but I’ll list the ones I added:
-
-
OpenSubtitles.com: requires an account (the .org option is deprecated).
-
For a free account it only lets you download around 20 subtitles per day, and they contain ads. You could pay for a VIP account ($3 per month) which will give you 1000 subtitles per day and no ads. But if you’re fine with 20 subtitles per day you can try to get rid of the ads by running an automated script. Such an option can be found at brianspilner01/media-server-scripts: sub-clean.sh.
I’ve tested this setup for the following languages (with all default settings as stated in the guides):
-
-
English
-
Spanish
-
-
I tried with “Latin American Spanish” but they’re hard to find, those two work pretty good.
-
None of these require an Anti-Captcha account (which is a paid service), but I created one anyways in case I need it. Though you need to add credits to it (pretty cheap though) if you ever use it.
]]>
-
-
- Set up qBitTorrent with Jackett for use with Starr apps
- https://blog.luevano.xyz/a/torrenting_with_qbittorrent.html
- https://blog.luevano.xyz/a/torrenting_with_qbittorrent.html
- Mon, 24 Jul 2023 02:06:24 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to set up a torrenting solution with qBitTorrent in preparation for a media server with Jellyfin and Starr apps, on Arch. With Jackett and flaresolverr, too.
- Riding on my excitement of having a good internet connection and having set up my home server, now it’s time to self host a media server for movies, series and anime. I’ll set up qBitTorrent as the downloader, Jackett for the trackers, the Starr apps for the automatic downloading and Jellyfin as the media server manager/media viewer. This was going to be a single entry but it ended up being a really long one, so I’m splitting it, this being the first part.
-
I’ll be exposing my stuff on a subdomain, only so I can access it while out of home and get SSL certificates (not required); it shouldn’t be necessary and you can instead use a VPN (how to set up). For your reference, whenever I say “Starr apps” (*arr apps) I mean the family of apps such as Sonarr, Radarr, Bazarr, Readarr, Lidarr, etc..
-
Most of my config is based on the TRaSH-Guides (will be mentioned as “TRaSH” going forward). Especially get familiar with the TRaSH: Native folder structure and with the TRaSH: Hardlinks and instant moves. I will also use the default configurations based on the respective documentation for each Starr app and service, except when stated otherwise.
-
Everything here is performed in arch btw and all commands should be run as root unless stated otherwise.
-
Kindly note that I do not condone the use of torrenting for illegal activities. I take no responsibility for what you do when setting up anything shown here. It is for you to check your local laws before using automated downloaders such as Sonarr and Radarr.
The specific programs are mostly recommendations, if you’re familiar with something else or want to change things around, feel free to do so but everything will be written with them in mind.
-
If you want to expose to a (sub)domain, then similar to my early tutorial entries (especially the website entry, for the reverse proxy plus certificates):
An A (and/or AAAA) or a CNAME for isos (or whatever you want to call it).
-
For automation software (qBitTorrent, Jackett, Starr apps, etc.). One subdomain per service can be used instead.
-
-
-
-
Note: I’m using the explicit 127.0.0.1 ip instead of localhost in the reverse proxies/services config as localhost resolves to ipv6 sometimes which is not configured on my server correctly. If you have it configured you can use localhost without any issue.
The desired behaviour is: set servarr as group ownership, set write access to group and whenever a new directory/file is created, inherit these permission settings. servarr is going to be a service user and I’ll use the root of a mounted drive at /mnt/a.
-
-
Create a service user called servarr (it could just be a group, too):
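A sketch of what that can look like, with /mnt/a as the root (commands follow the desired behaviour described above):

useradd -r -s /usr/bin/nologin servarr
chown -R servarr:servarr /mnt/a
chmod -R g+w /mnt/a
# setgid on directories so new files/dirs inherit the servarr group
find /mnt/a -type d -exec chmod g+s {} +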
Jackett is a “proxy server” (or “middle-ware”) that translates queries from apps (such as the Starr apps in this case) into tracker-specific http queries. Note that there is an alternative called Prowlarr that is better integrated with most if not all Starr apps, requiring less maintenance; I’ll still be sticking with Jackett, though.
-
Install from the AUR with yay:
-
yay -S jackett
-
-
I’ll be using the default 9117 port, but change accordingly if you decide on another one.
I’m going to have most of the services under the same subdomain, with different subdirectories. Create the config file isos.conf at the usual sites-available/enabled path for nginx:
-
server {
- listen 80;
- server_name isos.yourdomain.com;
-
- location /jack { # you can change this to jackett or anything you'd like, but it has to match the jackett config on the next steps
- proxy_pass http://127.0.0.1:9117; # change the port according to what you want
-
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
- proxy_set_header X-Forwarded-Host $http_host;
- proxy_redirect off;
- }
-}
-
-
This is the basic reverse proxy config as shown in Jackett: Running Jackett behind a reverse proxy. The rest of the services will be added under different location blocks in their respective steps.
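Extend or create the SSL certificate; assuming certbot with the nginx plugin as in previous entries:

certbot --nginx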
That will automatically detect the new subdomain config and create/extend your existing certificate(s). Restart the nginx service for changes to take effect:
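systemctl restart nginx.service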
It will autocreate the default configuration under /var/lib/jackett/ServerConfig.json, which you need to edit at least to change the BasePathOverride to match what you used in the nginx config:
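...
"BasePathOverride": "/jack",
...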
Also modify the Port if you changed it. Restart the jackett service:
-
systemctl restart jackett.service
-
-
It should now be available at https://isos.yourdomain.com/jack. Add an admin password right away by scrolling down to the first config setting; don’t forget to click on “Set Password”. Change any other config you want from the Web UI, too (you’ll need to click on the blue “Apply server settings” button).
-
Note that you need to set the “Base URL override” to http://127.0.0.1:9117 (or whatever port you used) so that the “Copy Torznab Feed” button works for each indexer.
For Jackett, an indexer is just a configured tracker for some of the commonly known torrent sites. Jackett comes with a lot of pre-configured public and private indexers which usually have multiple URLs (mirrors) per indexer, useful when the main torrent site is down. Some indexers come with extra features/configuration depending on what the site specializes on.
-
To add an indexer click on the “+ Add Indexer” at the top of the Web UI and look for indexers you want, then click on the “+” icon on the far-most right for each indexer or select the ones you want (clicking on the checkbox on the far-most left of the indexer) and scroll all the way to the bottom to click on “Add Selected”. They then will show as a list with some available actions such as “Copy RSS Feed”, “Copy Torznab Feed”, “Copy Potato Feed”, a button to search, configure, delete and test the indexer, as shown below:
-
-
You can manually test the indexers by doing a basic search to see if they return anything, either by searching on each individual indexer by clicking on the magnifying glass icon on the right of the indexer or clicking on “Manual Search” button which is next to the “+ Add Indexer” button at the top right.
-
Explore each indexer’s configuration in case there is stuff you might want to change.
FlareSolverr is used to bypass certain protection that some torrent sites have. This is not 100% necessary and only needed for some trackers sometimes, it also doesn’t work 100%.
-
You could install from the AUR with yay:
-
yay -S flaresolverr-bin
-
-
At the time of writing, the flaresolverr package didn’t work for me because of python-selenium. flaresolverr-bin was updated around the time I was writing this, so that is what I’m using and what’s working fine so far; it contains almost everything needed (it has self contained libraries) except for xvfb.
-
Install dependencies via pacman:
-
pacman -S xorg-server-xvfb
-
-
You can now start/enable the flaresolverr.service:
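systemctl enable --now flaresolverr.service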
Verify that the service started correctly by checking the logs:
-
journalctl -fxeu flaresolverr
-
-
It should show “Test successful” and “Serving on http://0.0.0.0:8191” (which is the default). Jackett now needs to be configured by adding http://127.0.0.1:8191 almost at the end in the “FlareSolverr API URL” field, then click on the blue “Apply server settings” button at the beginning of the config section. This doesn’t need to be exposed or anything, it’s just an internal API that Jackett (or anything you want) will use.
qBitTorrent is a fast, stable and light BitTorrent client that comes with many features and in my opinion it’s a really user friendly client and my personal choice for years now. But you can choose whatever client you want, there are more lightweight alternatives such as Transmission.
-
Install the qbittorrent-nox package (“nox” as in “no X server”):
-
pacman -S qbittorrent-nox
-
-
By default the package doesn’t create any (service) user, but it is recommended to have one so you can run the service under it. Create the user, I’ll call it qbittorrent and leave it with the default homedir (/home):
-
useradd -r -m qbittorrent
-
-
Add the qbittorrent user to the servarr group:
-
gpasswd -a qbittorrent servarr
-
-
Decide on a port number you’re going to run the service on for the next steps; the default is 8080 but I’ll use 30000, it doesn’t matter much as long as it matches across all the config. This is the qbittorrent service port, used to connect to the instance itself through the Web UI or via API; you also need to open a port for listening to peer connections. Choose any port you want, for example 50000, and open it with your firewall, ufw in my case:
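ufw allow 50000 comment 'qBitTorrent listening port'

Then start/enable the service; assuming the qbittorrent-nox@.service template unit shipped with the Arch package (it takes the user as the instance name):

systemctl enable --now qbittorrent-nox@qbittorrent.service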
This will start qbittorrent using default config. You need to change the port (in my case to 30000) and set qbittorrent to restart on exit (the Web UI has a close button). I guess this can be done before enabling/starting the service, but either way let’s create a “drop-in” file by “editing” the service:
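systemctl edit qbittorrent-nox@qbittorrent.service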
Which will bring up a file editing mode containing the service unit and a space where you can add/override anything, add:
-
[Service]
-# or whatever port number you want; note that systemd doesn't support inline comments
-Environment="QBT_WEBUI_PORT=30000"
-Restart=on-success
-RestartSec=5s
-
-
When exiting from the file (if you wrote anything) it will create the override config. Restart the service for changes to take effect (you might be asked to reload the systemd daemon):
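systemctl restart qbittorrent-nox@qbittorrent.service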
You can now head to https://isos.yourdomain.com/qbt/ and login with user admin and password adminadmin (by default). Change the default password right away by going to Tools -> Options -> Web UI -> Authentication. The Web UI is basically the same as the normal desktop UI, so if you’ve used it before it will feel familiar. From here you can treat it as a normal torrent client and even start using it for stuff other than what’s specified here.
It should be usable already but you can go further and fine tune it, especially to some kind of “convention” as shown in TRaSH: qBitTorrent basic setup and subsequent qbittorrent guides.
-
I use all the suggested settings by TRaSH, where the only “changes” are for personal paths, ports, and in general connection settings that depend on my setup. The only super important setting I noticed that can affect our setup (meaning what is described in this entry) is Web UI -> Authentication: “Bypass authentication for clients on localhost”. This will be an issue because the reverse proxy accesses qbittorrent via localhost, so enabling it would leave the service open to the world; experiment at your own risk.
-
Finally, add categories by following TRaSH: qBitTorrent how to add categories, basically right clicking on Categories -> All (x) (located at the left of the Web UI) and then on “Add category”; I use the same “Category” and “Save Path” (tv and tv, for example), where the “Save Path” will be a subdirectory of the configured global directory for torrents (TRaSH: qBitTorent paths and categories breakdown). I added 3: tv, movies and anime.
Often some of the trackers that come with torrents are dead or just don’t work. You have the option to add extra trackers to torrents either by:
-
-
Automatically add a predefined list on new torrents: configure at Tools -> Options -> BitTorrent, enable the last option “Automatically add these trackers to new downloads” then add the list of trackers. This should only be done on public torrents as private ones might ban you or something.
-
Manually add a list of trackers to individual torrents: configure by selecting a torrent, clicking on Trackers on the bottom of the Web UI, right clicking on an empty space and selecting “Add trackers…” then add the list of trackers.
-
-
On both options, the list of trackers need to have at least one new line in between each new tracker. You can find trackers from the following sources:
Both sources maintain an updated list of trackers. By default only the first tracker is contacted, so you also might need to enable an advanced option so all the new trackers are used: configure at Tools -> Options -> Advanced -> libtorrent Section and enable both “Always announce to all tiers” and “Always announce to all trackers in a tier”.
]]>
-
-
- Configure system logs on Arch to avoid filled up disk
- https://blog.luevano.xyz/a/arch_logs_flooding_disk.html
- https://blog.luevano.xyz/a/arch_logs_flooding_disk.html
- Thu, 15 Jun 2023 10:22:20 GMT
- Code
- English
- Server
- Short
- Tools
- Tutorial
- How to configure the system logs, mostly journald, from filling up the disk, on Arch.
- It’s been a while since I’ve been running a minimal server on a VPS, and it is a pretty humble VPS with just 32 GB of storage, which works for me as I’m only hosting a handful of services. At some point I started noticing that the disk kept filling up each time I checked.
-
Turns out that out of the box, Arch has a default config for systemd‘s journald that keeps a persistent journal log, but doesn’t have a limit on how much logging is kept. This means that depending on how many services you run, and how aggressively they log, the disk can fill up pretty quickly. For me it was around 15 GB of logs, from the normal journal directory, the nginx directory and my now unused prosody instance.
-
For prosody it was just a matter of deleting the directory as I’m not using it anymore, which freed around 4 GB of disk space.
-For journal I did a combination of configuring SystemMaxUse and creating a Namespace for all “email” related services as mentioned in the Arch wiki: systemd/Journal; basically just configuring /etc/systemd/journald.conf (and /etc/systemd/journald@email.conf with the comment change) with:
-
[Journal]
-Storage=persistent
-# use 50MB for the "email" Namespace; note that journald doesn't support inline comments
-SystemMaxUse=100MB
-
-
And then for each service that I want to use this “email” Namespace I add:
-
[Service]
-LogNamespace=email
-
-
Which can be changed manually or by executing systemctl edit service_name.service and it will create an override file which will be read on top of the normal service configuration. Once configured restart by running systemctl daemon-reload and systemctl restart service_name.service (probably also restart systemd-journald).
-
I also disabled the logging for ufw by running ufw logging off as it logs everything to the kernel “unit”, and I didn’t find a way to pipe its logs to a separate directory. It really isn’t that useful as most of the logs are the normal [UFW BLOCK] log, which is normal. If I need debugging then I’ll just enable that again. Note that you can change the logging level, if you still want some kind of logging.
-
Finally, to clean up the nginx logs, you need to install logrotate (pacman -S logrotate) as that is what is used to clean up the nginx log directory. nginx already “installs” a config file for logrotate, located at /etc/logrotate.d/; I just added a few lines:
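A sketch of how the rule can end up looking; the postrotate block comes with the packaged config, and the rotation/size lines at the top are the kind of additions I mean (values illustrative):

/var/log/nginx/*log {
    weekly
    rotate 4
    maxsize 10M
    missingok
    notifempty
    compress
    sharedscripts
    postrotate
        test ! -r /run/nginx.pid || kill -USR1 `cat /run/nginx.pid`
    endscript
}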
Once you’re ok with your config, it’s just a matter of running logrotate -v -f /etc/logrotate.d/nginx which forces the run of the rule for nginx. After this, logrotate will be run daily if you enable the logrotate timer: systemctl enable logrotate.timer.
]]>
-
-
- Set up a manga server with Komga and mangal
- https://blog.luevano.xyz/a/manga_server_with_komga.html
- https://blog.luevano.xyz/a/manga_server_with_komga.html
- Sat, 10 Jun 2023 19:36:07 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to set up a manga server with Komga as media server and mangal for downloading manga, on Arch. Tachiyomi integration is available thanks to Komga.
- I’ve been wanting to set up a manga media server to hoard some mangas/comics and access them via Tachiyomi, but I didn’t have enough space in my vultr VPS. Now that I have symmetric fiber optic at home and my spare PC to use as a server I decided to go ahead and create one. As always, i use arch btw so these instructions are specifically for it, I’m not sure how easier/harder it is for other distros, I’m just too comfortable with arch honestly.
-
I’m going to run it as an exposed service using a subdomain of my own, so the steps are taking that into account, if you want to run it locally (or on a LAN/VPN) then it is going to be easier/with less steps (you’re on your own). Also, as you might notice I don’t like to use D*ck*r images or anything (ew).
-
At the time of editing this entry (06-28-2023) Komga has already upgraded to v.1.0.0 and it introduces some breaking changes if you already had your instance set up. Read more here. The only change I did here was changing the port to the new default.
-
As always, all commands are run as root unless stated otherwise.
Similar to my early tutorial entries, if you want it as a subdomain:
-
-
An A (and/or AAAA) or a CNAME for komga (or whatever you want).
-
An SSL certificate, if you’re following the other entries (specially the website entry), add a komga.conf and run certbot --nginx (or similar) to extend/create the certificate. More details below: Reverse proxy and SSL certificate.
This is the first time I mention the AUR (and yay) in my entries, so I might as well just write a bit about it.
-
The AUR is the Arch Linux User Repository and it’s basically like an extension of the official one which is supported by the community, the only thing is that it requires a different package manager. The one I use (and I think everyone does, too) is yay, which as far as I know is like a wrapper of pacman.
To install and use yay we need a normal account with sudo access; all the commands related to yay are run as normal user and then it asks for the sudo password. Installation is straightforward: clone the yay repo and install. The only dependencies are git and base-devel:
-
Install dependencies:
-
sudo pacman -S git base-devel
-
-
Clone yay and install it (I also like to delete the cloned git repo):
-
git clone https://aur.archlinux.org/yay.git # the PKGBUILD lives in the AUR repo (HTTPS clone, no auth needed)
-cd yay
-makepkg -si
-cd ..
-sudo rm -r yay
-
yay is used basically the same as pacman, with the difference that it is run as a normal user (later requiring the sudo password) and that it asks for extra input when installing something, such as whether we want to build the package from source or show package diffs.
-
To install a package (for example Komga in this blog entry), run:
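-
yay -S komga
-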
As I mentioned in my past entry, I had to fork mangal and related repositories to fix/change a few things. Currently the major fix I did in mangal is for the built-in MangaDex scraper, which had a really annoying bug in the chunking of the manga chapter listing.
-
So instead of installing with yay, we’ll build it from source. We need to have go installed:
-
pacman -S go
-
-
Then clone my fork of mangal and install it:
-
git clone https://github.com/luevano/mangal.git # not sure if you can use SSH to clone
-cd mangal
-make install # or just `make build` and then move the binary to somewhere in your $PATH
-
-
This will use go install, so it will install to a path specified by the go environment variables; for more, run go help install. It was installed to $HOME/.local/bin/go/mangal for me because of my env vars; just make sure this is included in your PATH.
-
Check it was correctly installed by running mangal version, which should print something like:
-
▇▇▇ mangal
-
- Version ...
- Git Commit ...
- Build Date ...
- Built By ...
- Platform ...
-
I’m going to do everything with a normal user (manga-dl) which I created just to download manga. So all of the commands will be run without sudo/root privileges.
-
Change some of the configuration options:
-
mangal config set -k downloader.path -v "/mnt/d/mangal" # downloads to current dir by default
-mangal config set -k formats.use -v "cbz" # downloads as pdf by default
-mangal config set -k installer.user -v "luevano" # points to my scrapers repository which contains a few extra scrapers and fixes, defaults to metafates' one; this is important if you're using my fork, don't use otherwise as it uses extra stuff I added
-mangal config set -k logs.write -v true # I like to get logs for what happens
-
-
Note: for testing purposes (if you want to explore mangal), set downloader.path once you’re ready to start populating the Komga library directory (at Komga: populate manga library).
-
For more configs and to read what they’re for:
-
mangal config info
-
-
Also install the custom Lua scrapers by running:
-
mangal sources install
-
-
Then install whatever you want; it picks up the sources/scrapers from the configured repository (the installer.<key> config). If you followed along, it will show my scrapers.
Before continuing, I gotta say I went through some bullshit while trying to use the custom Lua scrapers that use the headless browser (actually just a wrapper of go-rod/rod, and honestly it is not really a “headless” browser, mangal “documentation” is just wrong). For more on my rant check out my last entry.
-
There is no concrete documentation on the “headless” browser, only that it is automatically set up and ready to use… but it doesn’t install any library/dependency needed. I discovered the following libraries that were missing on my Arch minimal install:
I can’t guarantee that those are all the packages needed; those are just the ones I happened to discover (I had to fork the lua libs and add some logging because the error message was too fucking generic).
-
These dependencies are probably met by installing either chromedriver or google-chrome from the AUR (for what I could see on the package dependencies).
Download manga using the TUI by selecting the source/scraper, searching for the manga/comic you want and then selecting each chapter to download (use tab to select all). This is what I use when downloading manga that already finished publishing, or when I’m just searching and testing out how it downloads the manga (directory name, and manga information).
-
Note that some scrapers will contain duplicated chapters, as they have multiple uploaded chapters from the community, usually from different scanlation groups. This happens a lot with MangaDex.
The inline mode is a single terminal command meant to be used to automate stuff or for more advanced options. You can peek a bit into the “documentation”, which honestly is ass because it doesn’t explain much. The minimal command for inline according to the mangal help is:
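-
mangal inline --query "manga name" # from memory; check `mangal inline --help` for the exact flags
-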
But this will not produce anything, because it also needs --source (or set the default using the config key downloader.default_sources) and either --json, which basically just does the search and returns the result in json format, or --download to actually download whatever is found; I recommend doing --json first to check that the correct manga will be downloaded, then doing --download.
-
Something not mentioned anywhere is the --manga flag options (found it at the source code), it has 3 available options:
-
-
first: first manga entry found for the search.
-
last: last manga entry found for the search.
-
exact: exact manga title match. This is the one I use.
-
-
Similar to --chapters, there are a few options not explained (that I found at the source code, too). I usually just use all, but the other options are:
-
-
all: all chapters found in the chapter list.
-
first: first chapter found in the chapter list.
-
last: last chapter found in the chapter list.
-
[from]-[to]: selector for the chapters found in the chapter list, index starts at 0.
-
If the selectors (from or to) exceed the amount of chapters in the chapter list, it just adjusts to the maximum available.
-
I had to fix this at the source code, because if you wanted to to be the last chapter, it did to + 1 and it failed due to an index out of range.
-
-
-
@[sub]@: not sure how this works exactly, my understanding is that it’s for “named” chapters.
Search first and make sure my command will pull the manga I want:
-
-
mangal inline --source "Mangapill" --manga "exact" --query "Kimetsu no Yaiba" --json | jq # I use jq to pretty format the output
-
-
-
I make sure the json output contains the correct manga information: name, url, etc..
-
-
-
You can also include the flag --include-anilist-manga to include anilist information (if any) so you can check that the correct anilist id is attached. If the correct one is not attached (and it exists) then you can bind the --query (search term) to a specific anilist id by running:
-
-
mangal inline anilist set --name "Kimetsu no Yaiba" --id 101922
-
-
-
If I’m okay with the outputs, then I swap --json for --download to actually download:
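-
mangal inline --source "Mangapill" --manga "exact" --query "Kimetsu no Yaiba" --download
-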
Check that the manga downloaded correctly. I do this by going to my download directory and checking the directory name (I’m picky with this stuff), that all chapters were downloaded, that it includes a correct series.json file and that it contains a cover.<img-ext>; this usually means it correctly pulled information from anilist and that it will contain metadata Komga will be able to use.
The straightforward approach for automation is to just bundle a bunch of mangal inline commands in a shell script and schedule its execution either via cron or systemd/Timers. But, as always, I overcomplicated/overengineered my approach, which is the following:
-
-
Group manga names per source.
-
Configure anything that should always be set before executing mangal, this includes anilist bindings.
-
Have a way to track the changes/updates on each run.
-
Use that tracker to know where to start downloading chapters from.
-
This is optional, as you can just do --chapters "all" and it will work but I do it mostly to keep the logs/output cleaner/shorter.
Function that handles the download per manga in the list:
-
mangal_src_dl () {
- source_name=$1
- manga_list=$(echo "$2" | tr '|' '\n')
-
- while IFS= read -r line; do
- # By default download all chapters
- chapters="all"
- last_chapter_n=$(grep -e "$line" "$TRACKER_FILE" | cut -d'|' -f2 | grep -v -e '^$' | tail -n 1)
- if [ -n "${last_chapter_n}" ]; then
- chapters="$last_chapter_n-9999"
- echo "Downloading [${last_chapter_n}-] chapters for $line from $source_name..."
- else
- echo "Downloading all chapters for $line from $source_name..."
- fi
- dl_output=$(mangal inline -S "$source_name" -q "$line" -m "exact" -F "$DOWNLOAD_FORMAT" -c "$chapters" -d)
-
- if [ $? -ne 0 ]; then
- echo "Failed to download chapters for $line."
- continue
- fi
-
- line_count=$(echo "$dl_output" | grep -v -e '^$' | wc -l)
- if [ $line_count -gt 0 ]; then
- echo "Downloaded $line_count chapters for $line:"
- echo "$dl_output"
- new_last_chapter_n=$(echo "$dl_output" | tail -n 1 | cut -d'[' -f2 | cut -d']' -f1)
- # manga_name|last_chapter_number|downloaded_chapters_on_this_update|manga_source
- echo "$line|$new_last_chapter_n|$line_count|$source_name" >> "$TRACKER_FILE"
- else
- echo "No new chapters for $line."
- fi
- done <<< "$manga_list"
-}
-
-
Where $TRACKER_FILE is just a variable holding a path to some file where you can store the tracking, and $DOWNLOAD_FORMAT is the format for the mangas, cbz for me. Then the usage would be something like mangal_src_dl "Mangapill" "$mangapill", meaning one function call per source.
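-
A sketch of how everything ties together (the tracker path and the manga list contents here are assumptions; note the | separator, which is what the function splits on):
-
TRACKER_FILE="$HOME/.cache/mangal-tracker" # any writable path works
-DOWNLOAD_FORMAT="cbz"
-
-mangapill="Berserk|Dandadan"
-mangal_src_dl "Mangapill" "$mangapill"
-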
-
A simpler function without “tracking” would be:
-
mangal_src_dl () {
- source_name=$1
- manga_list=$(echo "$2" | tr '|' '\n')
-
- while IFS= read -r line; do
- echo "Downloading all chapters for $line from $source_name..."
- mangal inline -S "$source_name" -q "$line" -m "exact" -F "$DOWNLOAD_FORMAT" -c "all" -d
- if [ $? -ne 0 ]; then
- echo "Failed to download chapters for $line."
- continue
- fi
- echo "Finished downloading chapters for $line."
- done <<< "$manga_list"
-}
-
-
The tracker file would have a format like follows:
-
# Updated: 06/10/23 10:53:15 AM CST
-Berserk|0392|392|Mangapill
-Dandadan|0110|110|Mangapill
-...
-
-
And note that if you already had manga downloaded and you run the script for the first time, it will show as if it downloaded everything from the first chapter, but that’s just how mangal works: it will actually just discover the downloaded chapters and only download anything missing.
-
Any configuration the downloader/updater might need has to be done before the mangal_src_dl calls. I like to configure mangal’s download path, format, etc. I also found that the mangal and rod browser caches (the headless browser used in some custom sources) need to be cleared, from personal experience and from others: mangal#170 and kaizoku#89.
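-
Clearing those caches looks something like this (the exact cache paths are assumptions based on my setup, verify them on yours):
-
rm -rf "$HOME/.cache/mangal" # mangal's cache
-rm -rf "$HOME/.cache/rod" # rod's browser cache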
-
You should also set any anilist binding necessary for the downloading (as the cache was cleared). An example of an anilist binding I had to do is for Mushoku Tensei, as it has both a light novel and a manga version; for me it’s the following binding:
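-
mangal inline anilist set --name "Mushoku Tensei" --id <anilist-manga-id> # id elided; look it up on anilist
-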
Finally, it’s just a matter of using your preferred way of scheduling; I’ll use systemd/Timers but anything is fine. You could make the downloader script more sophisticated and only run it every week on the day each manga (usually) gets released, but that’s too much work; I’ll probably just run it once daily.
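-
A sketch of the units, assuming the script lives at /home/manga-dl/manga-dl.sh and runs as the manga-dl user (unit names and paths are assumptions):
-
# /etc/systemd/system/manga-dl.service
-[Unit]
-Description=Download manga updates with mangal
-
-[Service]
-Type=oneshot
-User=manga-dl
-ExecStart=/home/manga-dl/manga-dl.sh
-
-# /etc/systemd/system/manga-dl.timer
-[Unit]
-Description=Run manga-dl daily
-
-[Timer]
-OnCalendar=daily
-Persistent=true
-
-[Install]
-WantedBy=timers.target
-
Then enable it with systemctl enable --now manga-dl.timer.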
-
A feature I want to add (and probably will) is sending notifications (probably through email) with a summary of the manga downloaded or failed downloads, so I’m on top of the updates. For now this is good enough and it’s been working so far.
This komga package creates a komga (service) user and group, which is tied to the included komga.service.
-
Configure it by editing /etc/komga.conf:
-
SERVER_PORT=25600
-SERVER_SERVLET_CONTEXT_PATH=/ # this depends a lot on how it's going to be served (domain, subdomain, ip, etc)
-
-KOMGA_LIBRARIES_SCAN_CRON="0 0 * * * ?"
-KOMGA_LIBRARIES_SCAN_STARTUP=false
-KOMGA_LIBRARIES_SCAN_DIRECTORY_EXCLUSIONS='#recycle,@eaDir,@Recycle'
-KOMGA_FILESYSTEM_SCANNER_FORCE_DIRECTORY_MODIFIED_TIME=false
-KOMGA_REMEMBERME_KEY=USE-WHATEVER-YOU-WANT-HERE
-KOMGA_REMEMBERME_VALIDITY=2419200
-
-KOMGA_DATABASE_BACKUP_ENABLED=true
-KOMGA_DATABASE_BACKUP_STARTUP=true
-KOMGA_DATABASE_BACKUP_SCHEDULE="0 0 */8 * * ?"
-
-
My changes (shown above):
-
-
cron schedules.
-
It’s not actually cron but rather a cron-like syntax used by Spring as stated in the Komga config.
If you’re going to run it locally (or on a LAN/VPN), you can start the komga.service and access it via IP at http://<your-server-ip>:<port>(/base_url), as stated at Komga: Accessing the web interface, and then continue with the mangal section; else continue with the next steps for the reverse proxy and certificate.
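-
Enabling and starting the service is the usual:
-
systemctl enable --now komga.service
-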
Create the reverse proxy configuration (this is for nginx). In my case I’ll use a subdomain, so this is a new config called komga.conf at the usual sites-available/enabled path:
-
server {
- listen 80;
- server_name komga.yourdomain.com; # change accordingly to your wanted subdomain and domain name
-
- location / {
- proxy_pass http://localhost:25600; # change port if needed
- proxy_http_version 1.1;
-
- proxy_set_header Host $host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Forwarded-Proto $scheme;
-
- proxy_read_timeout 600s;
- proxy_send_timeout 600s;
- }
-}
-
-
If it’s going to be used as a subdir on another domain then just change the location with /subdir instead of /; be careful with the proxy_pass directive, it has to match what you configured at /etc/komga.conf for the SERVER_SERVLET_CONTEXT_PATH regardless of the /subdir you selected at location.
If using a subdir, then the same certificate for the subdomain/domain should work fine and no extra stuff is needed; else, if following along with me, we can create/extend the certificate by running:
-
certbot --nginx
-
-
That will automatically detect the new subdomain config and create/extend your existing certificate(s). In my case I manage each certificate’s subdomain:
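-
Which for a specific set of (sub)domains looks something like this (domains here are placeholders):
-
certbot --nginx -d yourdomain.com -d komga.yourdomain.com
-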
Now access the web interface at https://komga.domainname.com, which should show the login page for Komga. The first time it will ask you to create an account, as shown in Komga: Create user account; this will be an admin account. Fill in the email and password (both can be changed later). The email doesn’t have to be an actual email; for now it’s just for management purposes.
-
Next thing would be to add any extra account (for read-only/download manga permissions), add/import libraries, etc.. For now I’ll leave it here until we start downloading manga on the next steps.
Creating a library is as simple as creating a directory somewhere and pointing to it in Komga. The following examples are for my use case, change accordingly. I’ll be using /mnt/d/mangal for my library (as stated in the mangal: configuration section):
-
mkdir /mnt/d/mangal
-
-
Where I chose the name mangal as it’s the name of the downloader/scraper; it could be anything, this is just how I like to organize stuff.
-
For the most part the permissions don’t matter much (as long as it’s readable by the komga user), unless you want to delete some manga, in which case the komga user also needs write permissions.
-
Then just create the library in Komga web interface (the + sign next to Libraries), choose a name “Mangal” and point to the root folder /mnt/d/mangal, then just click Next, Next and Add for the defaults (that’s how I’ve been using it so far). This is well explained at Komga: Libraries.
-
The real important part (for me) is the permissions of the /mnt/d/mangal directory, as I want to have write access for komga so I can manage from the web interface itself. It looks like it’s just a matter of giving ownership to the komga user either for owner or for group (or to all for that matter), but since I’m going to use a separate user to download manga then I need to choose carefully.
The desired behaviour is: set komga as group ownership, set write access to group and whenever a new directory/file is created, inherit these permission settings. I found out via this stack exchange answer how to do it. So, for me:
-
chown manga-dl:komga /mnt/d/mangal # required for group ownership for komga
-chmod g+s /mnt/d/mangal # required for group permission inheritance
-setfacl -d -m g::rwx /mnt/d/mangal # default permissions for group
-setfacl -d -m o::rx /mnt/d/mangal # default permissions for other (as normal, I think this command can be excluded)
-
-
Where manga-dl is the user I created to download manga with. Optionally add the -R flag to those 4 commands in case the directory already has subdirectories/files (this might mess up file permissions, but it’s not an issue as far as I know).
-
Checking that the permissions are set correctly (getfacl /mnt/d/mangal):
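-
The output should look roughly like this (exact mode bits may differ depending on your setup):
-
# file: mnt/d/mangal
-# owner: manga-dl
-# group: komga
-# flags: -s-
-user::rwx
-group::rwx
-other::r-x
-default:user::rwx
-default:group::rwx
-default:other::r-x
-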
You can now start downloading manga using mangal either manually or by running the cron/systemd/Timers and it will be detected by Komga automatically when it scans the library (once every hour according to my config). You can manually scan the library, though, by clicking on the 3 dots to the right of the library name (in Komga) and click on “Scan library files”.
-
Then you can check that the metadata is correct (once the manga is fully indexed and the metadata finished building), such as title, summary, chapter count, language, tags, genre, etc., which honestly never works fine, as mangal creates the series.json with the comicId field with an upper case I while Komga expects it to be a lower case i (comicid), so it falls back to using the info from the first chapter. I’ll probably fix this on mangal’s side and see how it goes.
-
So what I do is manually edit the metadata for the manga, changing whatever is wrong or adding what’s missing (I like adding anilist and MyAnimeList links), and then leave it as is. This is up to you.
Just for the record, here is a list of downloaders/scrapers I considered before starting to use mangal:
-
-
kaizoku: NodeJS web server that uses mangal for its “backend”, and honestly, since I liked mangal so much, I should use it; the only reason I don’t is because I’m a bitch and I don’t want to use a D*ck*r image and NodeJS (ew) (in general it’s pretty bloated in my opinion). If I get tired of my solution with pure mangal I might as well just migrate to it, as it’s a more automatic solution.
-
manga-py: Python CLI application that’s a really good option as far as I’ve explored it; I’m just not using it yet as mangal has been really smooth and has everything I need, but I will definitely explore it in the future if I need to. The cool thing out of the box is the amount of sources it can scrape from (something lacking in mangal).
-
mylar3: Python web server that should be the easiest way to download manga once correctly set up, but I guess I’m too dumb and don’t know how to configure it. It looks like you need access to specific private torrent trackers (or whatever the other download methods are); I just couldn’t figure out how to set it up, and for public torrent stuff everything will be all over the place, so this was not an option for me in the end.
-
-
Others:
-
-
HakuNeku: It looks pretty easy to use and feature rich; the only thing is that it’s not designed for headless servers, it’s just a normal desktop app. So this is also not an option for me. You could use it on your computer and rsync to your server, or use some other means to upload to your server (a no-no for me).
-
FMD: No fucking idea how to use it, and it hasn’t been updated since 2019; just listing it here as an option if it interests you.
-
]]>
-
-
- Updated the how-to entries titles
- https://blog.luevano.xyz/a/updating_creating_entries_titles_to_setup.html
- https://blog.luevano.xyz/a/updating_creating_entries_titles_to_setup.html
- Sat, 03 Jun 2023 03:46:44 GMT
- English
- Short
- Update
- Just a small update on the title for some old entries.
One of the main reasons I started “blogging” was basically just to document how I set stuff up so I can reference it later in the future, if I ever need to replicate the steps or just show somebody, and these entries have helped me do so multiple times. I’ll keep creating these entries, but after a while the Creating a title started to feel weird, because we’re not really creating anything, it is just a set up/configuration/how-to/etc. So I think that using Set up a for the titles is better and makes more sense; probably using How to set up a is better for the SEO bullshit.
-
Anyways, I’ll start using Set up a instead of Creating a and will retroactively change the titles for these entries (by this entry the change should be applied already). This might impact some RSS feeds, as they keep a cache of the feed and might duplicate the entries; heads up if for some reason somebody is using it.
]]>
-
-
- I had to learn Go and Lua the hard way
- https://blog.luevano.xyz/a/learned_go_and_lua_hard_way.html
- https://blog.luevano.xyz/a/learned_go_and_lua_hard_way.html
- Sat, 03 Jun 2023 03:32:17 GMT
- English
- Rant
- Short
- Tools
- Thanks to the issues of a program (mangal) I'm starting to use for my manga media server, I had to learn Go and Lua the hard way so that I can fix it and use it.
- TL;DR: I learned Go and Lua the hard way by forking (for fixing):
-
In the last couple of days I’ve been setting up a Komga server for manga downloaded using metafates/mangal (upcoming set up entry about it) and everything was fine so far, until I tried to download One Piece from MangaDex, for which mangal has a built-in scraper. Long story short, the issue was that MangaDex’s API only allows requesting manga chapters in chunks of 500 and the way that was being handled was completely wrong; specifics can be found on my commit (and the subsequent minor fix commit).
-
I tried to do a PR, but the project hasn’t been active since Feb 2023 (the same reason I didn’t even try to do PRs on the other repos), so I closed it and will start working on my own fork, probably just merging everything Belphemur‘s fork has to offer, as he’s been working on mangal actively. I could probably just fork from him and/or submit PRs to him, but I think I saw some changes I didn’t really like; I will have to look more into it.
-
Also, while trying to use some of the custom scrapers, I ran into issues with the headless chrome explorer implementation where it didn’t close on each manga chapter download, causing my CPU and Mem usage to get maxed out and losing control of the system. So I had to also fork metafates/mangal-lua-libs and “fixed” (I say fixed because that wasn’t the issue in the end, it was how the custom scrapers were using it, shitty documentation) the issue by adding the browser.Close() function to the headless Lua API (commit), and merged some commits from the original vadv/gopher-lua-libs just to include any features added to the Lua libs needed.
-
Finally, I forked metafates/mangal-scrapers (I actually forked NotPhantomX‘s fork, as they had included more scrapers from some pull requests) to be able to have updated custom Lua scrapers (in which I also fixed the headless bullshit) and use them on my mangal.
-
So, I went down the rabbit hole of manga scraping because I wanted to set up my Komga server and, more importantly, I had to quickly learn Go and Lua (Lua was easier). I have to say that Go’s module management is super convoluted; all the research I did led me to totally different answers, but that’s just because of the different Go versions and the year of the responses.
]]>
-
-
- Al fin tengo fibra ópticona
- https://blog.luevano.xyz/a/al_fin_tengo_fibra_opticona.html
- https://blog.luevano.xyz/a/al_fin_tengo_fibra_opticona.html
- Tue, 09 May 2023 08:59:00 GMT
- Rant
- Short
- Spanish
- Update
- I was finally able to get symmetric fiber optic internet, and I no longer suffer with the copper of a certain horrible company.
- Those who know me will know that I’ve spent about 2 years trying to get fiber optic internet (specifically from T*lm*x). The problem is that there were never any nodes/terminals available or, truth be told, the technicians didn’t even want to do their job, because they’re used to you having to slip them some cash to get it installed.
-
Well, the point is that I had to put up with the horrible company *zz*, which only has copper; the service is bad and they raise the price all the time. Because of the latter I went back to check other companies’ prices to compare, and it turns out they were charging me about $100 - $150 pesos extra for the same package I already had/have. I was already pissed at that point, and it didn’t help at all that I tried talking to their very incompetent support and they couldn’t, let’s say, “resolve” anything for me, because how is it possible that, being a customer of about 5 years, they can’t even let me know that they already have better packages (which honestly is the same package but cheaper)?
-
I tried asking them to switch me to the current package (same everything, the only difference being the price), but it turns out they would put me on a forced contract term. Obviously this lit a firecracker under my tail, so I checked with T*lm*x and, to my surprise, it showed that there was indeed fiber optic available at my place. I started the portability process and they told me it would be installed in about two weeks, but it turns out the based technician called me the next day to tell me he was ALREADY OUTSIDE MY HOUSE to install it. I won.
-
Turns out that now there are nodes/terminals; in fact they installed 3 new ones and they’re completely empty. I got really lucky, and the very based technician got it done in half a second without any hassle; he didn’t ask me for anything other than details on where I wanted the modem. I didn’t have cash in case he wouldn’t do it without a tip, but he was really cool about it.
]]>
-
-
- Updated pyssg to include pymdvar and the website
- https://blog.luevano.xyz/a/updated_pyssg_pymdvar_and_website.html
- https://blog.luevano.xyz/a/updated_pyssg_pymdvar_and_website.html
- Sat, 06 May 2023 12:39:14 GMT
- English
- Short
- Tools
- Update
- Worked on another update of pyssg which now includes my extension pymdvar and updated the website overall.
Again, I’ve updated pyssg, this time to add a bit of unit-testing as well as to include my extension pymdvar, which is used to convert ${some_variables} into their respective values based on a config file and/or environment variables. With this I also updated a bit of the CSS of the site, as well as basically all the entries and base templates, a much needed update (for me, because externally it doesn’t look like much). Along with this I also added a “return to top” button: once you scroll down enough, a new button appears on the bottom right to get back to the top. I also added tables of contents to entries that could use them (as well as a bit of CSS for them).
-
This update took a long time because I had a fundamental issue with how I was managing the “static” website where I host all assets such as CSS, JS, images, etc.: I was using the <base> HTML tag. The issue is that this tag affects everything and there is no way to “opt out” for specific tags, meaning that I would have to write the whole URL for all static assets. So I tried looking into changing how the image extension for python-markdown works so that it includes this “base” URL I needed, but it was too much hassle, so I ended up developing my own extension mentioned earlier. Just as a side note, I noticed that my extension doesn’t cover all my needs, so it probably won’t cover yours; if you end up using it, just test it out a bit yourself and then go ahead, PRs are welcomed.
-
One thing led to another, so I ended up changing a lot of stuff, and with changes comes tiredness, so I ended up leaving the project for a while (again). This also led to not wanting to write or add anything else to the site until I sorted things out. But I’m reviving it again, I guess, and on to the next cycle.
-
The next things I’ll be doing are continuing with my @gamedev journey and probably upload some drawings if I feel like doing some.
]]>
-
-
- Rewrote pyssg again
- https://blog.luevano.xyz/a/rewrote_pyssg_again.html
- https://blog.luevano.xyz/a/rewrote_pyssg_again.html
- Tue, 20 Dec 2022 04:31:05 GMT
- English
- Short
- Tools
- Update
- Rewrote pyssg to make it more flexible and to work with YAML configuration files.
I’ve been wanting to change the way pyssg reads config files and generates HTML files so that it is more flexible and I don’t need 2 separate build commands and configs (for blog and art), and also so it can handle other types of “sites”; pyssg was built with blogging in mind, so it was a bit limited in how it could be used. So I had to kind of rewrite pyssg, and with the latest version I can now generate the whole site and use the same templates for everything, quite neat for my use case.
-
Anyways, I bought a new domain for all pyssg related stuff, mostly because I wanted somewhere to test live builds while developing; it is, of course, pyssg.xyz. As of now it uses the same template, CSS and scripts that I use here, which will probably change in the future. I’ll be testing new features and anything pyssg related there.
-
I should start pointing all links to pyssg to the actual site instead of the github repository (or my git repository), but I haven’t decided how to handle everything.
]]>
-
-
- Creating my Go Godot Jam 3 entry using Godot 3.5 devlog 1
- https://blog.luevano.xyz/g/gogodot_jam3_devlog_1.html
- https://blog.luevano.xyz/g/gogodot_jam3_devlog_1.html
- Fri, 10 Jun 2022 09:17:05 GMT
- English
- Gamedev
- Gamejam
- Gdscript
- Godot
- Details on the implementation for the game I created for the Go Godot Jam 3, which theme is Evolution.
The jam’s theme is Evolution and all the details are listed here. This time I’m logging as I go, so there might be some changes to the script or scenes along the way. I couldn’t actually do this, as I was running out of time. Note that I’m not going to go into much detail; the obvious will be omitted.
-
I wanted to do a Snake clone, and I’m using this jam as an excuse to do it and add something to it. The features include:
-
-
Snakes will pass their stats in some form to the next snakes.
-
Non-grid snake movement. I just hate the grid constraint, so I wanted to make it move in any direction.
-
Depending on the food you eat, you’ll gain new mutations/abilities and the more you eat the more that mutation develops (didn’t have time to add this feature, sad).
-
Procedural map creation.
-
-
I created this game using Godot 3.5-rc3. You can find the source code in my GitHub here which at the time of writing this it doesn’t contain any exported files, for that you can go ahead and play it in your browser at itch.io, which you can find below:
Again, similar to the FlappyBird clone I created, I’m using the directory structure I wrote about on Godot project structure with slight modifications to test things out. Also using similar Project settings as those from the FlappyBird clone like the pixel art texture imports, keybindings, layers, etc..
-
I’ve also set up GifMaker, with slight modifications as the AssetLib doesn’t install it correctly and contains unnecessary stuff: moved the necessary files to the res://addons directory, deleted test scenes and files in general, and copied the license to the res://docs directory. Setting this up was a bit annoying because the tutorial is bad (with all due respect). I might do a separate entry just to explain how to set it up, because I couldn’t find it anywhere other than by inspecting some of the code/scenes. I ended up leaving this disabled in the game as it hit the performance by a lot, but it’s an option I’ll end up researching more.
-
This time I’m also going to be using an Event bus singleton (which I’m going to just call Event) as managing signals was pretty annoying on my last project; as well as a Global singleton for essential stuff so I don’t have to do as many cross references between nodes/scenes.
This is the most challenging part in my opinion, as making all the body parts follow the head along a user-defined path is kinda hard. I tried like 4-5 options and the one I’m detailing here is the only one that worked as I wanted. This time the directory structure I’m using is the following:
The most basic thing is to move the head, as this is what we have control of. Create a scene called Head.tscn and set up the basic KinematicBody2D with its own Sprite and CollisionShape2D (I used a small circle for the tip of the head), and set the Collision Layer/Mask accordingly, for now just layer = bit 1. All we need to do is keep moving the snake forwards and be able to rotate left or right. I created a new script called head.gd attached to the root (KinematicBody2D) and added:
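-
# a sketch; the input action names and the Global constants other than
-# SNAKE_SPEED are assumptions
-extends KinematicBody2D
-
-var direction: Vector2 = Vector2.RIGHT
-var velocity: Vector2 = Vector2.ZERO
-
-
-func _physics_process(delta: float) -> void:
-    # rotate left/right on input, then keep moving forwards
-    var turn: float = Input.get_action_strength("turn_right") - Input.get_action_strength("turn_left")
-    direction = direction.rotated(turn * Global.SNAKE_ROTATION_SPEED * delta)
-    rotation = direction.angle()
-    velocity = move_and_slide(direction * Global.SNAKE_SPEED)
-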
To move other snake parts by following the snake head the only solution I found was to use the Path2D and PathFollow2D nodes. Path2D basically just handles the curve/path that PathFollow2D will use to move its child node; and I say “child node” in singular… as PathFollow2D can only handle one damn child, all the other ones will have weird transformations and/or rotations. So, the next thing to do is to setup a way to compute (and draw so we can validate) the snake’s path/curve.
-
Added the signal snake_path_new_point(coordinates) to the Event singleton and then add the following to head.gd:
-
var _time_elapsed: float = 0.0
-
-# using a timer is not recommended for < 0.01
-func _handle_time_elapsed(delta: float) -> void:
- if _time_elapsed >= Global.SNAKE_POSITION_UPDATE_INTERVAL:
- Event.emit_signal("snake_path_new_point", global_position)
- _time_elapsed = 0.0
- _time_elapsed += delta
-
-
This will be pinging the current snake head position every 0.01 seconds (defined in Global). Now create a new scene called Snake.tscn which will contain a Node2D, a Path2D and an instance of Head as its childs. Create a new script called snake.gd attached to the root (Node2D) with the following content:
-
class_name Snake
-extends Node2D
-
-onready var path: Path2D = $Path
-
-func _ready():
- Event.connect("snake_path_new_point", self, "_on_Head_snake_path_new_point")
-
-
-func _draw() -> void:
- if path.curve.get_baked_points().size() >= 2:
- draw_polyline(path.curve.get_baked_points(), Color.aquamarine, 1, true)
-
-
-func _on_Head_snake_path_new_point(coordinates: Vector2) -> void:
- path.curve.add_point(coordinates)
- # update call is to draw curve as there are new points to the path's curve
- update()
-
-
With this, we’re now populating the Path2D curve points with the position of the snake head. You should be able to see it because of the _draw call. If you run it you should see something like this:
At this point the only thing to do is to add the corresponding next body parts and tail of the snake. To do so, we need a PathFollow2D to use the live-generating Path2D, the only caveat is that we need one of these per body part/tail (this took me hours to figure out, thanks documentation).
-
Create a new scene called Body.tscn with a PathFollow2D as its root and an Area2D as its child, then just add the necessary Sprite and CollisionShape2D for the Area2D; I’m using layer = bit 2 for its collision. Create a new script called generic_segment.gd with the following code:
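-
# a sketch; the exported type matches the Type parameter mentioned below
-extends PathFollow2D
-
-export(String, "body", "tail") var type: String = "body"
-
-
-func _ready() -> void:
-    # segments should not loop back to the curve's start nor rotate with it
-    loop = false
-    rotate = false
-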
And this can be attached to the Body‘s root node (PathFollow2D), no extra setup needed. Repeat the same steps for creating the Tail.tscn scene and when attaching the generic_segment.gd script just configure the Type parameter to tail in the GUI (by selecting the node with the script attached and editing in the Inspector).
Now it’s just a matter of handling when to add new body parts in the snake.gd script. For now I’ve only set it up to add body parts to fulfill the initial length of the snake (this doesn’t include the head or tail). The extra code needed is the following:
-
export(PackedScene) var BODY_SEGMENT_NP: PackedScene
-export(PackedScene) var TAIL_SEGMENT_NP: PackedScene
-
-var current_body_segments: int = 0
-var max_body_segments: int = 1
-
-
-func _add_initial_segment(type: PackedScene) -> void:
- if path.curve.get_baked_length() >= (current_body_segments + 1.0) * Global.SNAKE_SEGMENT_SIZE:
- var _temp_body_segment: PathFollow2D = type.instance()
- path.add_child(_temp_body_segment)
- current_body_segments += 1
-
-
-func _on_Head_snake_path_new_point(coordinates: Vector2) -> void:
- path.curve.add_point(coordinates)
- # update call is to draw curve as there are new points to the path's curve
- update()
-
- # add the following lines
- if current_body_segments < max_body_segments:
- _add_initial_segment(BODY_SEGMENT_NP)
- elif current_body_segments == max_body_segments:
- _add_initial_segment(TAIL_SEGMENT_NP)
-
-
Select the Snake node and add the Body and Tail scene to the parameters, respectively. Then when running you should see something like this:
-
-
Now we need to handle adding body parts after the snake is complete and has already moved for a bit; this requires a queue, so we can add part by part in case we eat multiple pieces of food in a short period of time. For this we need to add some signals: snake_adding_new_segment(type), snake_added_new_segment(type), snake_added_initial_segments, and use them where it makes sense. Now we need to add the following:
-
var body_segment_stack: Array
-var tail_segment: PathFollow2D
-# didn't know how to name this, basically holds the current path length
-# whenever we add a body segment; we use this queue to add body parts
-var body_segment_queue: Array
-
-
As well as updating _add_initial_segment with the following, so it adds the new segment to the specific variable:
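-
# a sketch of the updated function; the new segment is now also stored in the
-# corresponding variable so offsets can be managed later
-func _add_initial_segment(type: PackedScene) -> void:
-    if path.curve.get_baked_length() >= (current_body_segments + 1.0) * Global.SNAKE_SEGMENT_SIZE:
-        var _temp_segment: PathFollow2D = type.instance()
-        path.add_child(_temp_segment)
-        if type == BODY_SEGMENT_NP:
-            body_segment_stack.append(_temp_segment)
-        else:
-            tail_segment = _temp_segment
-        current_body_segments += 1
-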
Now it’s just a matter of creating the segment queue whenever a new segment is needed, as well as adding each segment in a loop whenever we have items in the queue and there is enough distance to place the segment. These two things can be achieved with the following code:
-
# this will be called in _physics_process
-func _add_new_segment() -> void:
- var _path_length_threshold: float = body_segment_queue[0] + Global.SNAKE_SEGMENT_SIZE
- if path.curve.get_baked_length() >= _path_length_threshold:
- var _removed_from_queue: float = body_segment_queue.pop_front()
- var _temp_body_segment: PathFollow2D = BODY_SEGMENT_NP.instance()
- var _new_body_offset: float = body_segment_stack.back().offset - Global.SNAKE_SEGMENT_SIZE
-
- _temp_body_segment.offset = _new_body_offset
- body_segment_stack.append(_temp_body_segment)
- path.add_child(_temp_body_segment)
- tail_segment.offset = body_segment_stack.back().offset - Global.SNAKE_SEGMENT_SIZE
-
- current_body_segments += 1
-
-
-func _add_segment_to_queue() -> void:
- # need to have the queues in a fixed separation, else if the eating functionality
- # gets spammed, all next bodyparts will be spawned almost at the same spot
- if body_segment_queue.size() == 0:
- body_segment_queue.append(path.curve.get_baked_length())
- else:
- body_segment_queue.append(body_segment_queue.back() + Global.SNAKE_SEGMENT_SIZE)
-
-
With everything implemented and connected accordingly then we can add segments on demand (for testing I’m adding with a key press), it should look like this:
-
-
For now this should be enough, I’ll add more stuff as needed as I go. The last thing is that, after I finished testing that the movement felt ok, I added a way to stop the snake whenever it collides with itself by using the following code (and the signal snake_segment_body_entered(body)) in a main.gd script that is the entry point for the game:
After a while of testing and developing, I noticed that sometimes the head “detaches” from the body when a lot of rotations happen (moving the snake left or right), because of how imprecise the Curve2D is. To fix this I just send a signal (snake_rotated) whenever the snake rotates and make a small correction (in generic_segment.gd):
For now I just decided to set up a simple system to see that everything works fine. The idea is to make some kind of generic food node/scene and a “food manager” to spawn them, for now in totally random locations. For this I added the following signals: food_placing_new_food(type), food_placed_new_food(type) and food_eaten(type).
-
First thing is creating the Food.tscn which is just an Area2D with its necessary children with an attached script called food.gd. The script is really simple:
-
class_name Food # needed to access Type enum outside of the script, this registers this script as a node
-extends Area2D
-
-enum Type {
- APPLE
-}
-
-var _type_texture: Dictionary = {
- Type.APPLE: preload("res://entities/food/sprites/apple.png")
-}
-
-export(Type) var TYPE
-onready var _sprite: Sprite = $Sprite
-
-
-func _ready():
- connect("body_entered", self, "_on_body_entered")
- _sprite.texture = _type_texture[TYPE]
-
-
-func _on_body_entered(body: Node) -> void:
- Event.emit_signal("food_eaten", TYPE)
- queue_free()
-
-
Then this food_eaten signal is received in snake.gd to add a new segment to the queue.
-
Finally, for the food manager I just created a FoodManager.tscn with a Node2D with an attached script called food_manager.gd. To get a random position:
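-
# a sketch: totally random position within the viewport, no validity checks yet
-func _get_random_pos() -> Vector2:
-    var x: float = rand_range(0.0, get_viewport_rect().size.x)
-    var y: float = rand_range(0.0, get_viewport_rect().size.y)
-    return Vector2(x, y)
-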
Which gets the job done, but later I’ll have to add a way to check that the position is valid. And to actually place the food:
-
func _place_new_food() -> void:
- var food: Area2D = FOOD.instance()
- var position: Vector2 = _get_random_pos()
- food.global_position = position
- add_child(food)
-
-
And this is used in _process to place new food whenever needed. For now I added a condition to add food until 10 pieces are in place, and keep adding whenever the food count is lower than 10. After setting everything up, this is the result:
It just happened that I saw a video about creating random maps with a method called random walks, made by NAD LABS: Nuclear Throne Like Map Generation In Godot. It’s a pretty simple but powerful script; he provided the source code, from which I based my random walker, just tweaked a few things and added others. Some of the maps that can be generated with this method (already added some random sprites):
-
-
-
-
It started with just black and white tiles, but I ended up adding some sprites as it was really harsh on the eyes. My implementation is basically the same as NAD LABS‘ with a few changes, most importantly: I separated the generation into 2 different tilemaps (floor and wall) to have better control, and wrapped everything in a single scene with a “main” script with the following important functions:
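-
# sketches of the helpers described below; floor_tilemap and the safe-cell
-# logic are assumptions
-func get_valid_map_coords() -> Array:
-    # used floor cells minus the safe cells around the origin, used to place food
-    var valid_cells: Array = []
-    var safe_cells: Array = get_cells_around(Vector2.ZERO)
-    for cell in floor_tilemap.get_used_cells():
-        if not safe_cells.has(cell):
-            valid_cells.append(cell)
-    return valid_cells
-
-
-func get_centered_world_position(location: Vector2) -> Vector2:
-    # tilemap coordinates to world position, centered on the tile
-    return floor_tilemap.map_to_world(location) + floor_tilemap.cell_size / 2.0
-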
Where get_cells_around is just a function that gets the safe cells around the origin. And this get_valid_map_coords just returns used cells minus the safe cells, to place food. get_centered_world_position is so we can center the food in the tiles.
-
Some signals I used for the world gen: world_gen_walker_started(id), world_gen_walker_finished(id), world_gen_walker_died(id) and world_gen_spawn_walker_unit(location).
The last food algorithm doesn’t check anything related to the world, and thus the food could spawn in the walls and outside the map.
-
First thing is that I generalized the food into a single script and added basic food and special food, which inherit from base food. The most important stuff for the base food is to be able to set all the necessary properties at first:
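-
# a sketch of the base food setup; names are assumptions beyond the
-# properties described below
-class_name BaseFood
-extends Area2D
-
-var points: int
-var location: Vector2 # tilemap coordinates
-var _texture: Texture
-
-
-func setup(new_points: int, new_location: Vector2, new_position: Vector2) -> void:
-    points = new_points
-    location = new_location
-    global_position = new_position
-
-
-func update_texture() -> void:
-    # called after the node is added as a child, once the sprite is ready
-    $Sprite.texture = _texture
-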
Where the update_texture needs to be a separate function, because we need to create the food first, set properties, add as a child and then update the sprite; we also need to keep track of the global position, location (in tilemap coordinates) and identifiers for the type of food.
-
Then basic/special food just extend base food, define a Type enum and preloads the necessary textures, for example:
Now, some of the most important change to food_manager.gd is to get an actual random valid position:
-
func _get_random_pos() -> Array:
- var found_valid_loc: bool = false
- var index: int
- var location: Vector2
-
- while not found_valid_loc:
- index = randi() % possible_food_locations.size()
- location = possible_food_locations[index]
- if current_basic_food.find(location) == -1 and current_special_food.find(location) == -1:
- found_valid_loc = true
-
- return [world_generator.get_centered_world_position(location), location]
-
-
Other than that, there are some differences between placing normal and special food (specially the signal they send, and if an extra “special points” property is set). Some of the signals that I used that might be important: food_placing_new_food(type), food_placed_new_food(type, location) and food_eaten(type, location).
I got the idea of saving the current stats (points, max body segments, etc.) in a separate Stats class for easier load/save data. The option I went with didn’t work as well as I would have liked, as it was a pain in the ass to set up, and each time a new property is added you have to manually update the load/save helper functions… so not the best option. I used json, but saving a Node directly or using resources (saving tres files) could work better.
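-
For context, a sketch of what the Stats class holds (the properties match what’s used further below; get_stats/set_stats convert to/from a Dictionary):
-
class_name Stats
-
-var points: int = 0
-var segments: int = 0
-var trait_dash: bool = false
-var trait_slow: bool = false
-var trait_jump: bool = false
-
-
-func get_stats() -> Dictionary:
-    return {
-        "points": points,
-        "segments": segments,
-        "trait_dash": trait_dash,
-        "trait_slow": trait_slow,
-        "trait_jump": trait_jump,
-    }
-
-
-func set_stats(stats: Dictionary) -> void:
-    points = stats["points"]
-    segments = stats["segments"]
-    trait_dash = stats["trait_dash"]
-    trait_slow = stats["trait_slow"]
-    trait_jump = stats["trait_jump"]
-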
The load/save function is pretty standard. It’s a singleton/autoload called SavedData with a script that extends from Node called save_data.gd:
-
const DATA_PATH: String = "user://data.save"
-
-var _stats: Stats
-
-
-func _ready() -> void:
- _load_data()
-
-
-# called when setting "stats" and thus saving
-func save_data(stats: Stats) -> void:
- _stats = stats
- var file: File = File.new()
- file.open(DATA_PATH, File.WRITE)
- file.store_line(to_json(_stats.get_stats()))
- file.close()
-
-
-func get_stats() -> Stats:
- return _stats
-
-
-func _load_data() -> void:
- # create an empty file if not present to avoid error while loading settings
- _handle_new_file()
-
- var file = File.new()
- file.open(DATA_PATH, File.READ)
- _stats = Stats.new()
- _stats.set_stats(parse_json(file.get_line()))
- file.close()
-
-
-func _handle_new_file() -> void:
- var file: File = File.new()
- if not file.file_exists(DATA_PATH):
- file.open(DATA_PATH, File.WRITE)
- _stats = Stats.new()
- file.store_line(to_json(_stats.get_stats()))
- file.close()
-
-
It uses json as the file format, but I might end up changing this in the future to something else more reliable and easier to use (Stats class related issues).
For this I created a scoring mechanism and just called it ScoreManager (score_manager.gd), which basically just listens to the food_eaten signal and adds points accordingly to the current Stats object loaded. The main function is:
-
func _on_food_eaten(properties: Dictionary) -> void:
- var is_special: bool = properties["special"]
- var type: int = properties["type"]
- var points: int = properties["points"]
- var special_points: int = properties["special_points"]
- var location: Vector2 = properties["global_position"]
- var amount_to_grow: int
- var special_amount_to_grow: int
-
- amount_to_grow = _process_points(points)
- _spawn_added_score_text(points, location)
- _spawn_added_segment_text(amount_to_grow)
-
- if is_special:
- special_amount_to_grow = _process_special_points(special_points, type)
- # _spawn_added_score_text(points, location)
- _spawn_added_special_segment_text(special_amount_to_grow, type)
- _check_if_unlocked(type)
-
-
Where the most important function is:
-
func _process_points(points: int) -> int:
- var score_to_grow: int = (stats.segments + 1) * Global.POINTS_TO_GROW - stats.points
- var amount_to_grow: int = 0
- var growth_progress: int
- stats.points += points
- if points >= score_to_grow:
- amount_to_grow += 1
- points -= score_to_grow
- # maybe be careful with this
- amount_to_grow += points / Global.POINTS_TO_GROW
- stats.segments += amount_to_grow
- Event.emit_signal("snake_add_new_segment", amount_to_grow)
-
- growth_progress = Global.POINTS_TO_GROW - ((stats.segments + 1) * Global.POINTS_TO_GROW - stats.points)
- Event.emit_signal("snake_growth_progress", growth_progress)
- return amount_to_grow
-
-
Which will add the necessary points to Stats.points and return the amount of new snake segments to grow. After this, _spawn_added_score_text and _spawn_added_segment_text just spawn a Label with the info on the points/segments gained; this is custom UI I created, nothing fancy.
-
The last thing is that in _on_food_eaten there is a check at the end, where if the food eaten is “special” then a custom variation of the last 3 functions is executed. These are really similar, just specific to each kind of food.
-
This ScoreManager also handles the calculation for the game_over signal, to calculate progress, set the necessary Stats values and save the data:
-
func _on_game_over() -> void:
- var max_stats: Stats = _get_max_stats()
- SaveData.save_data(max_stats)
- Event.emit_signal("display_stats", initial_stats, stats, mutation_stats)
-
-
-func _get_max_stats() -> Stats:
- var old_stats_dict: Dictionary = initial_stats.get_stats()
- var new_stats_dict: Dictionary = stats.get_stats()
- var max_stats: Stats = Stats.new()
- var max_stats_dict: Dictionary = max_stats.get_stats()
- var bool_stats: Array = [
- "trait_dash",
- "trait_slow",
- "trait_jump"
- ]
-
- for i in old_stats_dict:
- if bool_stats.has(i):
- max_stats_dict[i] = old_stats_dict[i] or new_stats_dict[i]
- else:
- max_stats_dict[i] = max(old_stats_dict[i], new_stats_dict[i])
- max_stats.set_stats(max_stats_dict)
- return max_stats
-
-
Then this sends a signal display_stats to activate UI elements that shows the progression.
-
Naturally, the saved Stats are loaded whenever needed. For example, for the Snake, we load the stats and setup any value needed from there (like a flag to know if any ability is enabled), and since we’re saving the new Stats at the end, then on restart we load the updated one.
I redesigned the snake code (the head, actually) to use the state machine pattern by following this guide which is definitely a great guide, straight to the point and easy to implement.
-
Other than what is shown in the guide, I implemented some important functions in the state_machine.gd script itself, to be used by each of the states as needed:
func _physics_process(delta: float) -> void:
- # state specific code, move_and_slide is called here
- if state.has_method("physics_process"):
- state.physics_process(delta)
-
- handle_slow_speeds()
- player.handle_time_elapsed(delta)
-
-
And now it’s just a matter of implementing the necessary states. I used 4: normal_state.gd, slow_state.gd, dash_state.gd and jump_state.gd.
-
The normal_state.gd contains what the original head.gd code contained:
-
func physics_process(delta: float) -> void:
- fsm.rotate_on_input()
- fsm.player.velocity = fsm.player.direction * Global.SNAKE_SPEED
- fsm.player.velocity = fsm.player.move_and_slide(fsm.player.velocity)
-
- fsm.slow_down_on_collisions(Global.SNAKE_SPEED_BACKUP)
-
-
-func input(event: InputEvent) -> void:
- if fsm.player.can_dash and event.is_action_pressed("dash"):
- exit("DashState")
- if fsm.player.can_slow and event.is_action_pressed("slow"):
- exit("SlowState")
- if fsm.player.can_jump and event.is_action_pressed("jump"):
- exit("JumpState")
-
-
Here, the exit method is basically to change to the next state. And lastly, I’m only gonna show the dash_state.gd as the other ones are pretty similar:
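-
# a sketch built from the description below; the base State class, the timer
-# handling and the exit mechanism are assumptions from the guide's pattern
-extends State
-
-onready var timer: Timer = $Timer # assumed dash duration timer node
-
-
-func enter() -> void:
-    # swap the global speed for the dash speed and start the dash duration timer
-    Global.SNAKE_SPEED = Global.SNAKE_DASH_SPEED
-    timer.start(Global.SNAKE_DASH_TIME)
-
-
-func _on_timer_timeout() -> void:
-    exit("NormalState")
-
-
-func exit(next_state: String) -> void:
-    # reset the speed back to normal before changing to the next state
-    Global.SNAKE_SPEED = Global.SNAKE_SPEED_BACKUP
-    .exit(next_state)
-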
Where the important parts happen in the enter and exit functions. We need to swap Global.SNAKE_SPEED with Global.SNAKE_DASH_SPEED on start and start the timer for how long the dash should last. On exit we reset Global.SNAKE_SPEED back to normal. There is probably a better way of updating Global.SNAKE_SPEED, but this works just fine.
-
For the other ones it’s the same. The only difference with the jump_state.gd is that the collision from head to body is disabled, and no rotation is allowed (by not calling the rotate_on_input function).
I actually didn’t finish this game (as I visualized it), but I got it to a semi-playable state, which is good. My big learning during this jam is the time management required to plan and design a game. I lost a lot of time trying to implement some mechanics because I was facing many issues, due to my lack of practice (which was expected), as well as trying to blog and create the necessary sprites myself. Next time I should just get an asset pack and do something with it, as well as keeping the scope of my game smaller.
]]>
-
-
- Creating a FlappyBird clone in Godot 3.5 devlog 1
- https://blog.luevano.xyz/g/flappybird_godot_devlog_1.html
- https://blog.luevano.xyz/g/flappybird_godot_devlog_1.html
- Sun, 29 May 2022 03:38:43 GMT
- English
- Gamedev
- Gdscript
- Godot
- Since I'm starting to get more into gamedev stuff, I'll start blogging about it just to stay consistent.
- I just have a bit of experience with Godot and with gamedev in general, so I started with this game as it is pretty straight forward. On a high level the main characteristics of the game are:
-
-
Literally just one sprite going up and down.
-
Constant horizontal move of the world/player.
-
If you go through the gap in the pipes you score a point.
-
If you touch the pipes, the ground or go past the “ceiling” you lose.
-
-
The game was originally developed with Godot 4.0 alpha 8, but it didn’t support HTML5 (webassembly) export… so I backported to Godot 3.5 rc1.
-
Note: I’ve updated the game to Godot 4 and documented it on my FlappyBird devlog 2 entry.
-
Not going to specify all the details, only the needed parts and what could be confusing, as the source code is available and can be inspected; this also assumes minimal knowledge of Godot in general. Usually when I mention setting/changing something, it’s a property that can be found under the Inspector on the relevant node, unless stated otherwise; also, all attached scripts have the same name as the scenes, but in snake_case (scenes/nodes in PascalCase).
-
One thing to note is that I started writing this when I finished the game, so it’s hard to go part by part, and it will be hard to test individual parts when going through this as everything depends on each other. For the next devlog, I’ll write it as I go and it will include all the changes to the nodes/scripts as I find them, probably a better idea and easier to follow.
-
The source code can be found at luevano/flappybirdgodot#godot-3.5 (godot-3.5 branch); it also contains the exported versions for HTML5, Windows and Linux (be aware that the sound might be too high and I’m too lazy to make it configurable, it was the last thing I added; on the latest version this is fixed and the audio level is configurable now). Playable on itch.io (Godot 4 version):
Since this is just pixel art, the importing settings for textures needs to be adjusted so the sprites don’t look blurry. Go to Project -> Project settings… -> Import defaults and on the drop down select Texture, untick everything and make sure Compress/Mode is set to Lossless.
It’s also a good idea to set up some config variables project-wide. To do so, go to Project -> Project settings… -> General, select Application/config and add a new property (there is a text box at the top of the project settings window) for game scale: application/config/game_scale, use float for the type, then click on add and configure the new property to 3.0. On the same window, also add application/config/version as a string and set it to 1.0.0 (or whatever number you want).
-
-
For my personal preferences, also disable some of the GDScript debug warnings that are annoying, this is done at Project -> Project settings… -> General, select Debug/GDScript and toggle off Unused arguments, Unused signal and Return value discarded, and any other that might come up too often and don’t want to see.
-
-
Finally, set the initial window size in Project -> Project settings… -> General, select Display/Window and set Size/Width and Size/Height to 600 and 800, respectively. As well as the Stretch/Mode to viewport , and Stretch/Aspect to keep:
I only used 3 actions (keybindings): jump, restart and toggle_debug (optional). To add custom keybindings (so that the Input.something() API can be used), go to Project -> Project settings… -> Input Map and on the text box write jump and click add, then it will be added to the list and it’s just a matter of clicking the + sign to add a Physical key, press any key you want to be used to jump and click ok. Do the same for the rest of the actions.
Finally, rename the physics layers so we don’t lose track of which layer is which. Go to Project -> Layer Names -> 2d Physics and change the first 5 layer names to (in order): player, ground, pipe, ceiling and score.
For the assets I found a pack that contains just what I need: flappy-bird-assets by MegaCrash; I just did some minor modifications to the naming of the files. For the font I used Silver, and for the sound the resources from FlappyBird-N64 (which seem to be taken from 101soundboards.com, whose original copyright holder is .Gears anyway).
Create the necessary directories to hold the respective assets and it’s just a matter of dragging and dropping, I used directories: res://entities/actors/player/sprites/, res://fonts/, res://levels/world/background/sprites/, res://levels/world/ground/sprites/, res://levels/world/pipe/sprites/, res://sfx/. For the player sprites, the
-FileSystem window looks like this (entities/actor directories are really not necessary):
-
-
It should look similar for other directories, except maybe for the file extensions. For example, for the sfx:
Now it's time to actually create the game, by creating the basic scenes that will make it up. The hardest and most confusing part is going to be the TileMaps, so those go first.
I'm using a scene called WorldTiles with a Node2D root node of the same name, with 2 different TileMap children named GroundTileMap and PipeTileMap (each one its own scene); yes, 2 different TileMaps, because we need 2 different physics colliders (in Godot 4.0 a single TileMap can hold different physics colliders). Each node has its own script. It should look something like this:
-
-
I used the following directory structure:
-
-
To configure the GroundTileMap, select the node, click on (empty) on the TileMap/Tile set property and then click on New TileSet; click where the (empty) used to be and a new window should open at the bottom:
-
-
Click on the plus at the bottom left to select the specific tile set to use. Then click on the yellow + New Single Tile, activate the grid and select any of the tiles. It should look like this:
-
-
We need to do this because, for some reason, we can't change the snap options before selecting a tile. After selecting a random tile, set the Snap Options/Step (in the Inspector) to 16x16 (or, if using a different tile set, to its tile size):
-
-
Now you can select the actual single tile. Once selected click on Collision, use the rectangle tool and draw the rectangle corresponding to that tile’s collision:
-
-
Do the same for the other 3 tiles. If you select the TileMap itself again, it should look like this on the right (on default layout it’s on the left of the Inspector):
-
-
The ordering is important only for the "underground tile" (the filler ground): it should be at the end (index 3). If this is not the case, repeat the process (it's possible to rearrange tiles, but the process is weird enough that it's easier to redo them).
-
At this point the tilemap doesn't have any physics and the cell size is wrong. Select the GroundTileMap, set the TileMap/Cell/Size to 16x16, set the TileMap/Collision/Layer to bit 2 only (ground layer) and disable all TileMap/Collision/Mask bits. It should look something like this:
-
-
Now it's just a matter of repeating the same for the pipes (PipeTileMap). The only difference is that when selecting the tiles you need to select 2 at a time, as the pipe is 2 tiles wide (or just set the Snap Options/Step to 32x16, for example); keep the cell size at 16x16.
I added a few default ground tiles to the scene, just for testing purposes, but I left them there. These could be placed programmatically, but I was too lazy to change things. On the WorldTiles scene, while the GroundTileMap is selected, you can pick the tile you want to paint with and left click on the grid to place it. Place tiles from (-8, 7) to (10, 7), as well as the filler ground on the row below (the tile position/coordinates show at the bottom left, refer to the image below):
On a new scene called Player, with a KinematicBody2D node named Player as the root of the scene, add as children: an AnimatedSprite named Sprite, a CollisionShape2D named Collision (with a circle shape) and 3 AudioStreamPlayers for JumpSound, DeadSound and HitSound. Not sure if it's good practice to have the audio here, since I did that at the end, pretty lazy. Then attach a script to the Player node; it should look like this:
-
-
Select the Player node and set the CollisionObject2D/Collision/Layer to bit 1 (player) and the CollisionObject2D/Collision/Mask to bits 2 and 3 (ground and pipe).
-
For the Sprite node, select it, click on the (empty) for the AnimatedSprite/Frames property and click New SpriteFrames; click again where the (empty) used to be and a new window should open at the bottom:
-
-
Right off the bat, set the Speed to 10 FPS (bottom left) and rename default to bird_1. With bird_1 selected, click on Add frames from a Sprite Sheet, which is the second button under Animation Frames: (it has an icon of a small grid, next to the folder icon); a new window will pop up where you need to select the respective sprite sheet and configure it for importing. On the Select Frames window, change Vertical to 1 and then select all 4 frames (Ctrl + Scroll wheel to zoom in):
-
-
After that, the SpriteFrames window should look like this:
-
-
Finally, make sure the Sprite node has AnimatedSprite/Animation set to bird_1 and that the Collision node is configured correctly for its size and position (I just have it as a radius of 7), and drop the SFX files into the corresponding AudioStreamPlayer (into the AudioStreamPlayer/Stream property).
These are really simple scenes that don’t require much setup:
-
-
CeilingDetector: just an Area2D node with a CollisionShape2D in the form of a rectangle (CollisionShape2D/Shape/extents to (120, 10)), stretched horizontally so it fits the whole screen. CollisionObject2D/Collision/Layer set to bit 4 (ceiling) and CollisionObject2D/Collision/Mask set to bit 1 (player).
-
ScoreDetector: similar to the CeilingDetector, but vertical (CollisionShape2D/Shape/extents to (2.5, 128)) and CollisionObject2D/Collision/Layer set to bit 1 (player).
-
WorldDetector: Node2D with a script attached, and 3 RayCast2D as children:
-
NewTile: Raycast2D/Enabled to true (checked), Raycast2D/Cast To to (0, 400), Raycast2D/Collision Mask to bit 2 (ground) and Node2D/Transform/Position to (152, -200)
-
OldTile: same as “NewTile”, except for the Node2D/Transform/Position, set it to (-152, -200).
-
OldPipe: same as “OldTile”, except for the Raycast2D/Collision Mask, set it to bit 3 (pipe).
This is the actual Game scene that holds all the playable stuff; here we will drop in all the previous scenes. The root node is a Node2D with a script attached. We also need to add 2 additional AudioStreamPlayers for the "start" and "score" sounds, as well as a Sprite for the background (Sprite/Offset/Offset set to (0, 10)) and a Camera2D (Camera2D/Current set to true (checked)). It should look something like this:
-
-
The scene viewport should look something like the following:
We need some font Resources to style the Label fonts. Under the FileSystem window, right click on the fonts directory (create one if needed), click on New Resource... and select DynamicFontData; save it in the "fonts" directory as SilverDynamicFontData.tres (Silver as it is the font I'm using), then double click the newly created resource and set DynamicFontData/Font Path to the actual Silver.ttf font (or whatever font you want).
-
Then create a new resource, this time a DynamicFont, named SilverDynamicFont.tres; double click to edit, add SilverDynamicFontData.tres to the DynamicFont/Font/Font Data property (I personally also toggled off the DynamicFont/Font/Antialiased property) and set DynamicFont/Settings/(Size, Outline Size, Outline Color) to 32, 1 and black, respectively (or any other values you want). It should look something like this:
-
-
Do the same for another DynamicFont, which will be used for the score label, named SilverScoreDynamicFont.tres. The only changes are DynamicFont/Settings/(Size, Outline Size), which are set to 128 and 2, respectively. The final files for the fonts should look something like this:
This has a bunch of nested nodes, so I’ll try to be concise here. The root node is a CanvasLayer named UI with its own script attached, and for the children:
-
-
MarginContainer: MarginContainer with Control/Margin/(Left, Top) set to 10 and Control/Margin/(Right, Bottom) set to -10.
-
InfoContainer: VBoxContainer with Control/Theme Overrides/Constants/Separation set to 250.
-
ScoreContainer: VBoxContainer.
-
Score: Label with Label/Align set to Center and Control/Theme Overrides/Fonts/Font set to SilverScoreDynamicFont.tres; if needed, adjust the DynamicFont settings.
-
HighScore: same as Score, except for the Control/Theme Overrides/Fonts/Font, which is set to SilverDynamicFont.tres.
-
-
-
StartGame: Same as HighScore.
-
-
-
DebugContainer: VBoxContainer.
-
FPS: Label.
-
-
-
VersionContainer: VBoxContainer with BoxContainer/Alignment set to Begin.
This is the final scene, where we connect the Game and the UI. It's made of a Node2D with its own script attached and an instance of Game and UI as its children.
-
This is a good time to set the default scene that runs when we play the game: go to Project -> Project settings… -> General and in Application/Run set the Main Scene to the Main.tscn scene.
I'm going to keep this scripting part to the most basic code blocks, as it's too much code; for a complete view you can head to the source code.
-
As of now, the game itself doesn’t do anything if we hit play. The first thing to do so we have something going on is to do the minimal player scripting.
The most basic code needed so the bird goes up and down is to detect jump key presses and apply a negative jump velocity so it goes up (the y coordinate is reversed in Godot…); we also check the sign of the y velocity to decide whether the animation should be playing.
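The full code isn't reproduced here; a minimal sketch of it would be something like this (the constant values are placeholders, tune them to taste):

# Player.gd (attached to the KinematicBody2D), minimal up/down movement
const GRAVITY: float = 1200.0
const JUMP_VELOCITY: float = -380.0

onready var sprite: AnimatedSprite = $Sprite

var velocity: Vector2 = Vector2.ZERO


func _physics_process(delta: float) -> void:
    velocity.y += GRAVITY * delta
    if Input.is_action_just_pressed("jump"):
        velocity.y = JUMP_VELOCITY

    # flap only while going up
    if velocity.y < 0.0:
        if not sprite.playing:
            sprite.play()
    else:
        sprite.stop()

    velocity = move_and_slide(velocity, Vector2.UP)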
You can play it now and you should be able to jump up and down, and the bird should stop at the ground (although you can keep jumping). One thing to notice is that when calling sprite.stop() it stays on the last frame; we can fix that with the code below (then change sprite.stop() to _stop_sprite()):
-
func _stop_sprite() -> void:
- if sprite.playing:
- sprite.stop()
- if sprite.frame != 0:
- sprite.frame = 0
-
-
Which just makes sure the sprite always rests on frame 0 after stopping.
-
Now it's just a matter of adding the rest of the needed code: moving horizontally, playing sounds by getting a reference to the AudioStreamPlayers and calling sound.play() when needed, and handling death scenarios by adding a signal died at the beginning of the script and funneling every type of death through the function below:
-
func _emit_player_died() -> void:
- # bit 2 corresponds to pipe (starts from 0)
- set_collision_mask_bit(2, false)
- dead = true
- SPEED = 0.0
- emit_signal("died")
- # play the sounds after, because yield will take a bit of time,
- # this way the camera stops when the player "dies"
- velocity.y = -DEATH_JUMP_VELOCITY
- velocity = move_and_slide(velocity)
- hit_sound.play()
- yield(hit_sound, "finished")
- dead_sound.play()
-
-
Finally, we need the actual checks for when the player dies (like collision with the ground or a pipe), as well as a function that listens to a signal for when the player touches the ceiling.
The code is pretty simple: we just need a way of detecting when we ran out of ground and send a signal, as well as sending a signal when we start detecting ground/pipes behind us (to remove them), because the world is being generated as we move.
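The most basic function needed is a "did this raycast just start colliding" check; based on how it's called in the next paragraph, it's along these lines (the raycast variable names are my guesses, matching the scene setup above):

# WorldDetector.gd sketch
onready var new_ground: RayCast2D = $NewTile
onready var old_ground: RayCast2D = $OldTile
onready var old_pipe: RayCast2D = $OldPipe

signal ground_started_colliding
signal pipe_started_colliding


# emits "signal_name" only on the frame the raycast starts colliding
func _now_colliding(ray: RayCast2D, was_colliding: bool, signal_name: String) -> bool:
    var now_colliding: bool = ray.is_colliding()
    if now_colliding and not was_colliding:
        emit_signal(signal_name)
    return now_colliding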
We need to keep track of 3 "flags": ground_was_colliding, ground_now_colliding and pipe_now_colliding (and their respective signals), which are used for the checks inside _physics_process. For example, to check for new ground: ground_now_colliding = _now_colliding(old_ground, ground_now_colliding, "ground_started_colliding").
This script handles the GroundTileMap as well as the PipeTileMap, and basically functions as a "signal bus", connecting a bunch of signals from the WorldDetector with the TileMaps and keeping track of how many pipes have been placed:
This is the node that actually places the ground tiles upon receiving a signal. In general, you want to keep track of the newest tile that needs to be placed (the empty spot) as well as the last tile still in the tilemap (technically the first one, counting from left to right). I was experimenting with enums, so I used one to define the possible Ground tiles:
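The enum itself is just a name for each tile index in the tile set; based on how it's used below (Ground.TILE_1, Ground.TILE_DOWN_1, with the filler at index 3), it's something like:

enum Ground {
    TILE_1,
    TILE_2,
    TILE_3,
    TILE_DOWN_1,
}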
This way you can just select the tile by doing Ground.TILE_1, which will correspond to the int value of 0. So most of the code is just:
-
# old_tile is the actual first tile, whereas the new_tile_position
-# is the next empty tile; these also correspond to the top tile
-const _ground_level: int = 7
-const _initial_old_tile_x: int = -8
-const _initial_new_tile_x: int = 11
-var old_tile_position: Vector2 = Vector2(_initial_old_tile_x, _ground_level)
-var new_tile_position: Vector2 = Vector2(_initial_new_tile_x, _ground_level)
-
-
-func _place_new_ground() -> void:
- set_cellv(new_tile_position, _get_random_ground())
- set_cellv(new_tile_position + Vector2.DOWN, Ground.TILE_DOWN_1)
- new_tile_position += Vector2.RIGHT
-
-
-func _remove_first_ground() -> void:
- set_cellv(old_tile_position, -1)
- set_cellv(old_tile_position + Vector2.DOWN, -1)
- old_tile_position += Vector2.RIGHT
-
-
Where you might notice that _initial_new_tile_x is 11 instead of 10; refer to Default ground tiles, where we placed tiles from -8 to 10, so the next empty one is 11. These _place_new_ground and _remove_first_ground functions are called upon receiving the corresponding signal; _get_random_ground is sketched below.
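_get_random_ground isn't shown above; a minimal guess at it, picking one of the 3 "top" tile variants:

func _get_random_ground() -> int:
    # Ground.TILE_1 to Ground.TILE_3 are the top tiles (indexes 0 to 2)
    return randi() % 3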
This is really similar to the GroundTileMap code; instead of defining an enum for the ground tiles, we define one for the pipe patterns (because each pipe is composed of multiple pipe tiles). If your pipe tile set looks like this (notice the index):
Now, the pipe system requires a bit more tracking, as we need to instantiate a ScoreDetector here too. I ended up keeping track of the placed pipes/detectors by using a "pipe stack" (and "detector stack"), which is just an array of placed objects from which I pop the first element when deleting them:
-
onready var _pipe_sep: int = get_parent().PIPE_SEP
-const _pipe_size: int = 16
-const _ground_level: int = 7
-const _pipe_level_y: int = _ground_level - 1
-const _initial_new_pipe_x: int = 11
-var new_pipe_starting_position: Vector2 = Vector2(_initial_new_pipe_x, _pipe_level_y)
-var pipe_stack: Array
-
-# don't specify type for game, as it results in cyclic dependency,
-# as stated here: https://godotengine.org/qa/39973/cyclic-dependency-error-between-actor-and-actor-controller
-onready var game = get_parent().get_parent()
-var detector_scene: PackedScene = preload("res://levels/detectors/score_detector/ScoreDetector.tscn")
-var detector_offset: Vector2 = Vector2(16.0, -(_pipe_size / 2.0) * 16.0)
-var detector_stack: Array
-
-
The detector_offset is just me being picky. For placing a new pipe, we take the starting position (bottom pipe tile) and build upwards, then instantiate a new ScoreDetector (detector_scene) and set its position to the pipe's starting position plus the offset, so it's centered in the pipe's opening; then we connect the detector's body_entered signal with the game, so we keep track of the scoring. Finally, add the placed pipe and detector to their corresponding stacks:
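Putting that together, the placing function looks roughly like this; _get_random_pipe (a pipe-pattern picker analogous to _get_random_ground) and _on_ScoreDetector_body_entered (the game's handler) are my placeholder names:

func _place_new_pipe() -> void:
    var current_tile: Vector2 = new_pipe_starting_position
    # build the pipe pattern from the bottom tile upwards
    for tile in _get_random_pipe():
        set_cellv(current_tile, tile)
        current_tile += Vector2.UP

    # instance a detector centered in the pipe opening and wire it to the game
    var detector: Area2D = detector_scene.instance()
    detector.position = map_to_world(new_pipe_starting_position) + detector_offset
    detector.connect("body_entered", game, "_on_ScoreDetector_body_entered")
    add_child(detector)

    pipe_stack.append(new_pipe_starting_position)
    detector_stack.append(detector)
    new_pipe_starting_position.x += _pipe_sep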
For removing pipes, it’s really similar but instead of getting the position from the next tile, we pop the first element from the (pipe/detector) stack and work with that. To remove the cells we just set the index to -1:
-
func _remove_old_pipe() -> void:
- var current_pipe: Vector2 = pipe_stack.pop_front()
- var c: int = 0
- while c < _pipe_size:
- set_cellv(current_pipe, -1)
- current_pipe += Vector2.UP
- c += 1
-
- var detector: Area2D = detector_stack.pop_front()
- remove_child(detector)
- detector.queue_free()
-
-
These functions are called when receiving the signal to place/remove pipes.
Before proceeding, we need a way to save/load data (for the high scores). We're going to use the ConfigFile class, which uses a custom version of the INI file format. First we need to define where to save the data:
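Something like the following; the exact path and section name are assumptions based on how they're used below:

# SavedData.gd
const DATA_PATH: String = "user://data.cfg"
const SCORE_SECTION: String = "score"

var _data: ConfigFile = ConfigFile.new()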
Then, whenever this script is loaded, we load the data, and if it's a new file we add the default high score of 0:
-
func _ready() -> void:
- _load_data()
-
- if not _data.has_section(SCORE_SECTION):
- set_new_high_score(0)
- save_data()
-
-
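The helpers used above aren't reproduced in full; minimal versions of them would be:

func _load_data() -> void:
    # a missing file just means there is no data yet
    var _error: int = _data.load(DATA_PATH)


func save_data() -> void:
    var _error: int = _data.save(DATA_PATH)


func set_new_high_score(high_score: int) -> void:
    _data.set_value(SCORE_SECTION, "high_score", high_score)


func get_high_score() -> int:
    return _data.get_value(SCORE_SECTION, "high_score", 0)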
Now, this script in particular needs to be a Singleton (AutoLoad), which means there will be only one instance of it, available across all scripts. To set it up, go to Project -> Project settings… -> AutoLoad, select this script in Path: and add a Node Name: (I used SavedData; if you use something else, be careful while following this devlog), which is the name we'll use to access the singleton. Toggle on Enable if needed; it should look like this:
The game script also works like a "signal bus", in the sense that it connects all of its children's signals together, and it also has the job of starting/stopping the _process and _physics_process methods of its children as needed. First, we need to define the signals and references to all the child nodes:
-
signal game_started
-signal game_over
-signal new_score(score, high_score)
-
-onready var player: Player = $Player
-onready var background: Sprite = $Background
-onready var world_tiles: WorldTiles = $WorldTiles
-onready var ceiling_detector: Area2D = $CeilingDetector
-onready var world_detector: Node2D = $WorldDetector
-onready var camera: Camera2D = $Camera
-onready var start_sound: AudioStreamPlayer = $StartSound
-onready var score_sound: AudioStreamPlayer = $ScoreSound
-
-
It's important to get the actual "player speed": since we're using a scale to make the game look bigger (remember, pixel art), we need a reference to the game_scale we set up at the beginning to compute the player_speed:
-
var _game_scale: float = ProjectSettings.get_setting("application/config/game_scale")
-var player_speed: float
-
-
-func _ready() -> void:
- scale = Vector2(_game_scale, _game_scale)
- # so we move at the actual speed of the player
- player_speed = player.SPEED / _game_scale
-
-
This player_speed is needed because we have to move all the x-bound nodes (Background, Camera, etc.) as the player moves. This is done in _physics_process; a sketch of it is below.
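Not reproducing the exact code here; in essence it advances every x-bound node by the same amount each physics tick, something like:

func _physics_process(delta: float) -> void:
    var offset: float = player_speed * delta
    background.position.x += offset
    camera.position.x += offset
    ceiling_detector.position.x += offset
    world_detector.position.x += offset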
The player is a special case: when it dies it should still move (only down), otherwise it would just freeze in place. In _ready we connect all the necessary signals and initially set the processing to false using that same _set_processing_to function. To start/restart the game we keep a flag called is_game_running, initially set to false, and handle the (re)startability in _input:
-
func _input(event: InputEvent) -> void:
- if not is_game_running and event.is_action_pressed("jump"):
- _set_processing_to(true)
- is_game_running = true
- emit_signal("game_started")
- start_sound.play()
-
- if event.is_action_pressed("restart"):
- get_tree().reload_current_scene()
-
When the player dies, we set all processing to false, except for the player itself (so it can drop all the way to the ground). Also, when receiving a "scoring" signal, we update the current score and save the new high score when applicable; note that we need to read the high_score at the beginning by calling SavedData.get_high_score(). The signal we emit here is received by the UI so it updates accordingly.
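As a sketch (the handler name matches the placeholder used when connecting the detectors earlier; the details are my reconstruction):

var score: int = 0
onready var high_score: int = SavedData.get_high_score()


func _on_ScoreDetector_body_entered(_body: Node) -> void:
    score += 1
    score_sound.play()
    if score > high_score:
        high_score = score
        SavedData.set_new_high_score(high_score)
        SavedData.save_data()
    emit_signal("new_score", score, high_score)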
First thing is to get a reference to all the child Labels, an initial reference to the high score as well as the version defined in the project settings:
-
onready var fps_label: Label = $MarginContainer/DebugContainer/FPS
-onready var version_label: Label = $MarginContainer/VersionContainer/Version
-onready var score_label: Label = $MarginContainer/InfoContainer/ScoreContainer/Score
-onready var high_score_label: Label = $MarginContainer/InfoContainer/ScoreContainer/HighScore
-onready var start_game_label: Label = $MarginContainer/InfoContainer/StartGame
-
-onready var _initial_high_score: int = SavedData.get_high_score()
-
-var _version: String = ProjectSettings.get_setting("application/config/version")
-
-
Then set the initial Label values and make the fps_label invisible:
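Which is roughly the following (the exact label texts are guesses, use whatever you prefer):

func _ready() -> void:
    fps_label.visible = false
    version_label.text = "v" + _version
    score_label.text = "0"
    high_score_label.text = "High score: %s" % _initial_high_score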
At this point the game should be fully playable (if any detail is missing, feel free to look into the source code linked at the beginning). The only thing missing is an icon for the game; I did one pretty quickly with the assets I had.
If you followed the directory structure I used, the only thing needed is to transform the icon to the native Windows ico format (if exporting to Windows, else ignore this part). For this you need ImageMagick or some other program that can convert png (or whatever format you used for the icon) to ico. I used Chocolatey to install imagemagick, then converted the icon with: magick convert icon.png -define icon:auto-resize=256,128,64,48,32,16 icon.ico, as detailed in Godot's Changing application icon for Windows.
You need to download the templates for exporting as detailed in Godot‘s Exporting projects. Basically you go to Editor -> Manage Export Templates… and download the latest one specific to your Godot version by clicking on Download and Install.
-
If exporting for Windows then you also need to download rcedit from here. Just place it wherever you want (I put it next to the Godot executable).
-
Then go to Project -> Export… and the window should be empty; add a new template by clicking on Add... at the top and then select the template you want. I used HTML5, Windows Desktop and Linux/X11. Really the only thing you need to set is the "Export Path" for each template, which is the location where the executable will be written; in the case of the Windows Desktop template you can also set up stuff like Company Name, Product Name, File/Product Version, etc.
-
Once the templates are set up, select any of them, click on Export Project at the bottom and make sure to untoggle Export With Debug in the window that pops up (this checkbox is at the bottom of the new window).
]]>
-
-
- General Godot project structure
- https://blog.luevano.xyz/g/godot_project_structure.html
- https://blog.luevano.xyz/g/godot_project_structure.html
- Sun, 22 May 2022 01:16:10 GMT
- English
- Gamedev
- Godot
- Short
- Details on the project structure I'm using for Godot, based on preference and some research I did.
One of my first issues when starting a project is how to structure everything, so I had to spend some time researching best practices and going with what I like the most; after trying some of them, I wanted to write down somewhere what I'm sticking with.
-
The first place to look is, of course, the official Godot documentation on Project organization, which along with the project structure discussion also comes with best practices for code style and what-not. I don't like that project/directory structure much, just because it tells you to bundle everything for one thing under the same directory, but it's a really good starting point; for example, it tells you to use:
-
-
/models/town/house/
    house.dae
    window.png
    door.png
-
-
-
-
Where I would prefer to have more modularity, keeping everything that makes up the house together.
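For example, something along these lines (a sketch of the idea, my reconstruction rather than the exact original listing):

/models/town/house/
    sprites/
        window.png
        door.png
    house.dae
    House.tscn
    house.gd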
It might look like more work, but I prefer it like this. I wish this site was still available, as I got most of my ideas from there and it was a pretty good resource, but apparently the owner is not maintaining his site anymore; there is, however, this excellent comment on reddit which shows a project/directory structure more in line with what I'm currently using (and similar to the site that is down that I liked). I ended up with:
-
-
/.git
/assets (raw assets/editable assets/asset packs)
/releases (executables ready to publish)
/src (the actual godot project)
    .godot/
    actors/ (or entities)
        player/
            sprites/
            player.x
            …
        enemy/ (this could be a dir with subdirectories for each type of enemy, for example…)
            sprites/
            enemy.x
            …
        actor.x
        …
    levels/ (or scenes)
        common/
            sprites/
            …
        main/
            …
        overworld/
            …
        dungeon/
            …
        Game.tscn (I'm considering the "Game" as a level/scene)
        game.gd
    objects/
        box/
            …
        …
    screens/
        main_menu/
            …
        …
    globals/ (singletons/autoloads)
    ui/
        menus/
            …
        …
    sfx/
        …
    vfx/
        …
    etc/
        …
    Main.tscn (the entry point of the game)
    main.gd
    icon.png (could also be on a separate "icons" directory)
    project.godot
    …
<any other repository related files>
-
-
And so on, I hope the idea is clear. I’ll probably change my mind on the long run, but for now this has been working fine.
]]>
-
-
- Will start blogging about gamedev
- https://blog.luevano.xyz/g/starting_gamedev_blogging.html
- https://blog.luevano.xyz/g/starting_gamedev_blogging.html
- Tue, 17 May 2022 05:19:54 GMT
- English
- Gamedev
- Godot
- Short
- Update
- Since I'm starting to get more into gamedev stuff, I'll start blogging about it just to keep consistent.
- I’ve been wanting to get into gamedev for a while now, but it’s always a pain to stay consistent. I just recently started to get into it again, and this time I’m trying to actually do stuff.
-
So, the plan is to blog about my progress and clone some simple games just to get started. I'm thinking of sticking with Godot, just because I like that it's open source, it's getting better and better over time (a big rewrite is happening right now) and I already like how the engine works. Specifically, I'll start using Godot 4 even though it's not done yet, to get used to the new features; I'm especially pumped for GDScript 2.0. Actually… (for the small clones/ripoffs) I'll need to use Godot 3.X (probably 3.5), as Godot 4 doesn't support exporting to WebAssembly (HTML5) yet, and I want that to publish to itch.io and my website. I'll continue to use Godot 4 for bigger projects, as they will take longer, and I hope that by the time I need to publish there are no issues with exporting.
-
For a moment I almost started a new subdomain just for gamedev stuff, but decided to just use a different directory for subtlety; this directory and the use of tags should be enough. I'll be posting the entry about the first rip-off I'm developing (FlappyBird L O L) shortly.
-
Update: Godot 4 already released and it now has HTML5 support, so back to the original plan.
]]>
-
-
- My setup for a password manager and MFA authenticator
- https://blog.luevano.xyz/a/password_manager_authenticator_setup.html
- https://blog.luevano.xyz/a/password_manager_authenticator_setup.html
- Sun, 15 May 2022 22:40:34 GMT
- English
- Short
- Tools
- A short description on my personal setup regarding a password manager and alternatives to G\*\*gl\* authenticator.
- Disclaimer: I won’t go into many technical details here of how to install/configure/use the software, this is just supposed to be a short description on my setup.
-
It's been a while since I started using a password manager at all, and I'm happy that I started with KeePassXC (an open source, multiplatform, completely offline password manager) as a direct recommendation from EL ELE EME; before this I was using the same password for everything (like a lot of people), which is a well-known privacy issue, as noted in detail by Leo (I don't personally recommend LastPass as Leo does). Note that you will still need a master password to lock/unlock your password database (you can additionally use a hardware key and a key file).
-
Anyway, setting up keepass is pretty simple, as there is a client for almost any device; note that keepass is basically just the format and the base for all of the clients, as is common with pretty much any open source software. In my case I'm using KeePassXC on my computer and KeePassDX on my phone (Android). The only concern is keeping everything in sync, because keepass doesn't have any automatic method of synchronizing between devices, for security reasons (as far as I know), meaning that you have to manage that yourself.
-
Usually you can use something like G**gl* drive, dropbox, mega, nextcloud or any other cloud solution that you like to sync your keepass database between devices; I personally prefer Syncthing, as it's open source, really easy to set up and has worked wonders for me since I started using it; it also keeps versions of your files that can serve as backups in any scenario where the database gets corrupted or something.
-
Finally, when I went through the issue with the micro SD and the adoptable storage bullshit (you can find the rant here, in Spanish), I also had to migrate from G**gl* authenticator (gauth) to something else, for the simple reason that gauth doesn't even let you do backups, nor is it synced with your account… nothing, it is just standalone, and if you ever lose your phone you're fucked. So I decided to go with Aegis authenticator, as it is open source, you have control over all your secret keys, you can do backups directly to the filesystem, you can secure your database with an extra password, etc., etc. In general, aegis is the superior MFA authenticator (at least compared with gauth) and everything that's compatible with gauth is compatible with aegis, as the format is a standard (as a matter of fact, keepass also has this MFA feature, called TOTP, and it is also compatible, but I prefer to have things separate). I also use syncthing to keep a backup of my aegis database.
-
TL;DR:
-
-
Syncthing to sync files between devices (for the password databases).
-
KeePassXC for the password manager in my computer.
-
KeePassDX for the password manager in my phone.
-
Aegis authenticator for MFA codes.
]]>
-
-
- Los devs de Android/MIUI me trozaron
- https://blog.luevano.xyz/a/devs_android_me_trozaron.html
- https://blog.luevano.xyz/a/devs_android_me_trozaron.html
- Sun, 15 May 2022 09:51:04 GMT
- Rant
- Spanish
- Update
- Perdí un día completo resolviendo un problema muy estúpido, por culpa de los devs de Android/MIUI.
- Llevo dos semanas posponiendo esta entrada porque andaba bien enojado (todavía, pero ya se anda pasando) y me daba zzz. Pero bueno, antes que nada este pex ocupa un poco de contexto sobre dos cositas:
-
-
Tachiyomi: Una aplicación de android que uso para descargar y leer manga. Lo importante aquí es que por default se guardan los mangas con cada página siendo una sola imagen, por lo que al mover el manga de un lado a otro tarda mucho tiempo.
-
Adoptable storage: Un feature de android que básicamente te deja usar una micro SD (mSD) externa como si fuera interna, encriptando y dejando la mSD inutilizable en cualquier otro dispositivo. La memoria interna se pierde o algo por el estilo (bajo mi experiencia), por lo que parece es bastante útil cuando la capacidad de la memoria interna es baja.
-
-
Ahora sí, vámonos por partes. Primero que nada, lo que sucedió fue que ordené una mSD con más capacidad que la que ya tenía (64 GB -> 512 GB, poggies), porque últimamente he estado bajando y leyendo mucho manga, entonces me estaba quedando sin espacio. Ésta llegó el día de mi cumpleaños, lo cual estuvo chingón; me puse a hacer backup de la mSD que ya tenía y a preparar todo, muy bonito, muy bonito.
-
Empecé a tener problemas, porque al estar moviendo tanto archivo pequeño (porque recordemos que el tachiyomi trata a cada página como una sola imagen), la conexión entre el celular y mi computadora se estaba corte y corte por alguna razón; en general muchos pedos. Por lo que mejor le saqué la nueva mSD y la metí directo a mi computadora por medio de un adaptador para batallar menos y que fuera más rápido.
-
Hacer este pedo de mover archivos directamente en la mSD puede llevar a corromper la memoria, no sé los detalles pero pasa (o quizá estoy meco e hice algo mal). Por lo que al terminar de mover todo a la nueva mSD y ponerla en el celular, éste se emputó que porque no la detectaba y que quería tirar un formateo a la mSD. A este punto no me importaba mucho, sólo era cuestión de volver a mover archivos y ser más cuidadoso; “no issues from my end” diría en mis standups.
-
Todo valió vergota porque en cierto punto al elegir sí formatear la mSD mi celular me daba la opción de “usar la micro SD para el celular” o “usar la micro SD como memoria portátil” (o algo entre esas líneas), y yo, estúpidamente, elegí la primera, porque me daba sentido: “no, pues simón, voy a usar esta memoria para este celular”.
-
Pues mamé, resulta que esa primera opción lo que realmente quería decir es que se iba a usar la micro SD como interna usando el pex este de adoptable storage. Entonces básicamente perdí mi capacidad de memoria interna (128 GB aprox.), y toda la mSD nueva se usó como memoria interna. Todo se juntó, si intentaba sacar la mSD todo se iba a la mierda y no podía usar muchas aplicaciones. “No hay pedo”, pensé, “nada más es cuestión de desactivar esta mamada de adoptable storage”.
-
Ni madres dijeron los devs de Android, este pedo nada más es un one-way: puedes activar adoptable storage pero para desactivarlo ocupas, a huevo, formatear tu celular a estado de fábrica. Chingué a mi madre, comí mierda, perdí.
-
Pues eso fue lo que hice, ni modo. Hice backup de todo lo que se me ocurrió (también me di cuenta que G**gl* authenticator es cagada ya que no te deja hacer backup, entre otras cosas, mejor usen Aegis authenticator), desactivé todo lo que se tenía que desactivar y tocó hacer factory reset, ni modo. Pero como siempre las cosas salen mal y tocó comer mierda del banco porque me bloquearon la tarjeta, perdí credenciales necesarias para el trabajo (se resolvió rápido), etc., etc.. Ya no importa, ya casi todo está resuelto, sólo queda ir al banco a resolver lo de la tarjeta bloqueada (esto es para otro rant, pinches apps de bancos piteras, ocupan hacer una sola cosa y la hacen mal).
-
Al final del día, la causa del problema fueron los malditos mangas (por andar queriendo backupearlos), que terminé bajando de nuevo manualmente y resultó mejor porque aparentemente tachiyomi agregó la opción de “zippear” los mangas en formato CBZ, por lo que ya son más fácil de mover de un lado para otro, el fono no se queda pendejo, etc., etc..
-
Por último, quiero decir que los devs de Android son unos pendejos por no hacer reversible la opción de adoptable storage, y los de MIUI son todavía más por no dar detalles de lo que significan sus opciones de formateo, especialmente si una opción es tan chingadora que para revertirla necesitas formatear a estado de fábrica tu celular; más que nada es culpa de los de MIUI, todavía que ponen un chingo de A(i)DS en todas sus apps, no pueden poner una buena descripción en sus opciones. REEEE.
]]>
-
-
- Volviendo a usar la página
- https://blog.luevano.xyz/a/volviendo_a_usar_la_pagina.html
- https://blog.luevano.xyz/a/volviendo_a_usar_la_pagina.html
- Thu, 28 Apr 2022 03:21:02 GMT
- Short
- Spanish
- Update
- Actualización en el estado de la página, después de mucho tiempo de ausencia.
- Después de mucho tiempo de estar luchando con querer volver a usar este pex (maldita d word y demás), ya me volví a acomodar el setup para agregar nuevas entradas.
-
Entre las cosas que tuve que hacer fue actualizar el pyssg porque no lo podía usar de una como estaba; y de pasada le agregué una que otra feature nueva. Luego quiero agregarle más funcionalidad para poder buildear la página completa; por ahora se hace en segmentos: todo lo de luevano.xyz está hecho manualmente, mientras que blog y art usan pyssg.
-
Otra cosa es que quizá me devuelva a editar algunas entradas nada más para homogeneizar las entradas específicas a Create a… (tiene más sentido que sean Setup x… o algo similar).
-
En otras noticias, estoy muy a gusto en el jale que tengo actualmente, aunque lleve alrededor de 3 semanas de un infierno en el jale. Debo pensar en si debo omitir cosas personales o del trabajo aquí, ya que quién sabe quién se pueda llegar a topar con esto *thinking emoji*.
]]>
-
-
- Set up a VPN server with OpenVPN
- https://blog.luevano.xyz/a/vpn_server_with_openvpn.html
- https://blog.luevano.xyz/a/vpn_server_with_openvpn.html
- Sun, 01 Aug 2021 09:27:02 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to set up a VPN server using OpenVPN on a server running Nginx, on Arch. Only for IPv4.
I've been wanting to write this entry, but had no time since I also had to set up the VPN service itself to make sure what I'm writing makes sense; today is the day.
-
This will be installed and working alongside the other stuff I've written about in other posts (see the server tag). All commands here are executed as root unless stated otherwise. Also, this is intended only for IPv4 (it's not that hard to include IPv6, but meh).
Working server with root access, and with ufw as the firewall.
-
Open port 1194 (default), or 443 as a fallback (click here for more). I will do mine on port 1194, but it's just a matter of changing 2 lines of configuration and one ufw rule.
PKI stands for Public Key Infrastructure, and it's basically what's required for certificates, private keys and more. This is supposed to work between two servers and one client: a server in charge of creating, signing and verifying the certificates, a server with the OpenVPN service running, and the client making the request.
-
In a nutshell, this works something like: 1) a client wants to use the VPN service, so it creates a request and sends it to the signing server, 2) this server checks and signs the request, returning the certificates to both the VPN service and the client, and 3) the client can now connect to the VPN service using the signed certificate, which the OpenVPN server knows about.
-
That's how it should be set up… but, to be honest, all of this is a hassle and (in my case) I want something simple to use and manage. So I'm going to do everything on one server and then just give away the configuration file for the clients, effectively generating files that anyone can run and that will work, meaning you need to be careful who you give these files to (it also comes with a revoking mechanism, so no worries).
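I'm not detailing the easy-rsa part here, but for reference, the usual single-server flow looks roughly like this (Arch package and paths; double-check against the easy-rsa docs, as locations may differ on your setup):

pacman -S easy-rsa
cd /etc/easy-rsa
easyrsa init-pki
easyrsa build-ca nopass
easyrsa build-server-full server nopass
easyrsa gen-dh
cp pki/dh.pem /etc/openvpn/server/dh.pem
# shared secret for tls-crypt
openvpn --genkey --secret /etc/openvpn/server/ta.key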
OpenVPN is a robust and highly flexible VPN daemon that's pretty complete feature-wise.
-
Install the openvpn package:
-
pacman -S openvpn
-
-
Now, most of the stuff is going to be handled by (each, if you have more than one) server configuration. This might be the hardest thing to configure, but below is a basic configuration file that has worked well for me, which is a compilation of stuff I found on the internet while configuring it a while back.
-
# Server IP address (IPv4).
-local 1.2.3.4 # your server public IP
-
-# Port.
-port 1194 # Might want to change it to 443
-
-# TCP or UDP.
-;proto tcp
-proto udp # If port changes to 443, you should change this to tcp, too
-
-# "dev tun" will create a routed IP tunnel,
-# "dev tap" will create an ethernet tunnel.
-;dev tap
-dev tun
-
-# Server specific certificates and more.
-ca /etc/easy-rsa/pki/ca.crt
-cert /etc/easy-rsa/pki/issued/server.crt
-key /etc/easy-rsa/pki/private/server.key # This file should be kept secret.
-dh /etc/openvpn/server/dh.pem
-auth SHA512
-tls-crypt /etc/openvpn/server/ta.key 0 # This file is secret.
-crl-verify /etc/easy-rsa/pki/crl.pem
-
-# Network topology.
-topology subnet
-
-# Configure server mode and supply a VPN subnet
-# for OpenVPN to draw client addresses from.
-server 10.8.0.0 255.255.255.0
-
-# Maintain a record of client <-> virtual IP address
-# associations in this file.
-ifconfig-pool-persist ipp.txt
-
-# Push routes to the client to allow it
-# to reach other private subnets behind
-# the server.
-;push "route 192.168.10.0 255.255.255.0"
-;push "route 192.168.20.0 255.255.255.0"
-
-# If enabled, this directive will configure
-# all clients to redirect their default
-# network gateway through the VPN, causing
-# all IP traffic such as web browsing and
-# and DNS lookups to go through the VPN
-push "redirect-gateway def1 bypass-dhcp"
-
-# Certain Windows-specific network settings
-# can be pushed to clients, such as DNS
-# or WINS server addresses.
-# Google DNS.
-;push "dhcp-option DNS 8.8.8.8"
-;push "dhcp-option DNS 8.8.4.4"
-
-# The keepalive directive causes ping-like
-# messages to be sent back and forth over
-# the link so that each side knows when
-# the other side has gone down.
-keepalive 10 120
-
-# The maximum number of concurrently connected
-# clients we want to allow.
-max-clients 5
-
-# It's a good idea to reduce the OpenVPN
-# daemon's privileges after initialization.
-user nobody
-group nobody
-
-# The persist options will try to avoid
-# accessing certain resources on restart
-# that may no longer be accessible because
-# of the privilege downgrade.
-persist-key
-persist-tun
-
-# Output a short status file showing
-# current connections, truncated
-# and rewritten every minute.
-status openvpn-status.log
-
-# Set the appropriate level of log
-# file verbosity.
-#
-# 0 is silent, except for fatal errors
-# 4 is reasonable for general usage
-# 5 and 6 can help to debug connection problems
-# 9 is extremely verbose
-verb 3
-
-# Notify the client when the server restarts so it
-# can automatically reconnect.
-# Only usable with udp.
-explicit-exit-notify 1
-
-
# and ; are comments. Read each and every line, as you might want to change some stuff (like the logging), especially the first line, which is your server's public IP.
Now, we need to enable packet forwarding (so we can access the web while connected to the VPN), which can be enabled on the interface level or globally (you can check the different options with sysctl -a | grep forward). I’ll do it globally, run:
-
sysctl net.ipv4.ip_forward=1
-
-
And create/edit the file /etc/sysctl.d/30-ipforward.conf:
-
net.ipv4.ip_forward=1
-
-
Now we need to configure ufw to forward traffic through the VPN. Append the following to /etc/default/ufw (or edit the existing line):
-
...
-DEFAULT_FORWARD_POLICY="ACCEPT"
-...
-
-
And change the /etc/ufw/before.rules, appending the following lines after the header but before the *filter line:
-
...
-# NAT (Network Address Translation) table rules
-*nat
-:POSTROUTING ACCEPT [0:0]
-
-# Allow traffic from clients to the interface
--A POSTROUTING -s 10.8.0.0/24 -o interface -j MASQUERADE
-
-# do not delete the "COMMIT" line or the NAT table rules above will not be processed
-COMMIT
-
-# Don't delete these required lines, otherwise there will be errors
-*filter
-...
-
-
Where interface must be changed depending on your system (in my case it’s ens3, another common one is eth0); I always check this by running ip addr which gives you a list of interfaces (the one containing your server public IP is the one you want, or whatever interface your server uses to connect to the internet):
-
...
-2: ens3: <SOMETHING,SOMETHING> bla bla
- link/ether bla:bla
- altname enp0s3
- inet my.public.ip.addr bla bla
-...
-
-
And also make sure the 10.8.0.0/24 matches the subnet specified in the server.conf file (in this example it matches). Check this very carefully; I just spent a good 2 hours debugging why my configuration wasn't working, and this was the reason (I could connect to the VPN, but had no external connection to the web).
-
Finally, allow the OpenVPN port you specified (in this example it's 1194/udp) and reload ufw; the commands are below.
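Which is just the following (adjust if you changed the port/protocol):

ufw allow 1194/udp
ufw reload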
You might notice that I didn't specify how to actually connect to the VPN. For that we need a configuration file similar to the server.conf file we created.
-
The real way of doing this would be to run steps similar to the easy-rsa ones locally, send the request to the server, sign it, and retrieve it. Fuck all that, we'll just create all configuration files on the server, as I mentioned earlier.
-
Also, the client configuration file has to match the server one (to some degree); to make this easier, you can create a client-common file in /etc/openvpn/server with the following content:
-
client
-dev tun
-remote 1.2.3.4 1194 udp # change this to match your ip and port
-resolv-retry infinite
-nobind
-persist-key
-persist-tun
-remote-cert-tls server
-auth SHA512
-verb 3
-
-
Where you should make any changes necessary, depending on your configuration.
-
Now, we need a way to create and revoke configuration files. For this I created a script, heavily based on one from the links I mentioned at the beginning. You can place this script anywhere you like, and you should take a look at it before running it, because you'll be running it with elevated privileges (sudo).
-
In a nutshell, what it does is: generate a new client certificate keypair, update the CRL and create a new .ovpn configuration file that consists of the client-common data and all of the required certificates; or revoke an existing client and refresh the CRL. The file is placed under ~/ovpn.
-
Create a new file with the following content (name it whatever you like) and don’t forget to make it executable (chmod +x vpn_script):
-
#!/bin/sh
-# Client ovpn configuration creation and revoking.
-MODE=$1
-if [ ! "$MODE" = "new" -a ! "$MODE" = "rev" ]; then
- echo "$1 is not a valid mode, using default 'new'"
- MODE=new
-fi
-
-CLIENT=${2:-guest}
-if [ -z "$2" ]; then
- echo "there was no client name passed as second argument, using 'guest' as default"
-fi
-
-# Expiration config (exported so easyrsa picks them up).
-export EASYRSA_CERT_EXPIRE=3650
-export EASYRSA_CRL_DAYS=3650
-
-# Current PWD.
-CPWD=$PWD
-cd /etc/easy-rsa/
-
-if [ "$MODE" = "rev" ]; then
- easyrsa --batch revoke $CLIENT
-
- echo "$CLIENT revoked."
-elif [ "$MODE" = "new" ]; then
- easyrsa build-client-full $CLIENT nopass
-
- # This is what actually generates the config file.
- {
- cat /etc/openvpn/server/client-common
- echo "<ca>"
- cat /etc/easy-rsa/pki/ca.crt
- echo "</ca>"
- echo "<cert>"
- sed -ne '/BEGIN CERTIFICATE/,$ p' /etc/easy-rsa/pki/issued/$CLIENT.crt
- echo "</cert>"
- echo "<key>"
- cat /etc/easy-rsa/pki/private/$CLIENT.key
- echo "</key>"
- echo "<tls-crypt>"
- sed -ne '/BEGIN OpenVPN Static key/,$ p' /etc/openvpn/server/ta.key
- echo "</tls-crypt>"
- } > "$(eval echo ~${SUDO_USER:-$USER}/ovpn/$CLIENT.ovpn)"
-
- eval echo "~${SUDO_USER:-$USER}/ovpn/$CLIENT.ovpn file generated."
-fi
-
-# Finish up, re-generates the crl
-easyrsa gen-crl
-chown nobody:nobody pki/crl.pem
-chmod o+r pki/crl.pem
-cd $CPWD
-
-
And the way to use it is to run bash vpn_script <mode> <client_name> as sudo, where mode is new or rev (revoke); when revoking, it doesn't actually delete the .ovpn file in ~/ovpn. Again, this is a little script I put together, so you should check it out; it may need tweaks (especially depending on your directory structure for easy-rsa).
-
Now, just get the .ovpn file generated, import it to OpenVPN in your client of preference and you should have a working VPN service.
]]>
-
-
- Hoy me tocó desarrollo de personaje
- https://blog.luevano.xyz/a/hoy_toco_desarrollo_personaje.html
- https://blog.luevano.xyz/a/hoy_toco_desarrollo_personaje.html
- Wed, 28 Jul 2021 06:10:55 GMT
- Rant
- Spanish
- Update
- Una breve historia sobre cómo estuvo mi día, porque me tocó desarrollo de personaje y lo quiero sacar del coraje que traigo.
- Sabía que hoy no iba a ser un día tan bueno, pero no sabía que iba a estar tan horrible; me tocó desarrollo de personaje y saqué el bad ending.
-
Básicamente tenía que cumplir dos misiones hoy: ir al banco a un trámite y vacunarme contra el Covid-19. Muy sencillas tareas.
-
Primero que nada me levanté de una pesadilla horrible en la que se puede decir que se me subió el muerto al querer despertar, esperé a que fuera casi la hora de salida de mi horario de trabajo, me bañé y fui directo al banco primero. Todo bien hasta aquí.
-
En el camino al banco, durante la plática con el conductor del Uber salió el tema del horario del banco. Yo muy tranquilo dije “pues voy algo tarde, pero sí alcanzo, cierran a las 5, ¿no?”, a lo que me respondió el conductor “nel jefe, a las 4, y se van media hora antes”; quedé. Chequé y efectivamente cerraban a las 4. Entonces le dije que le iba a cambiar la ruta directo a donde me iba a vacunar, pero ya era muy tarde y quedaba para la dirección opuesta. “Ni pedo, ahí déjame y pido otro viaje, no te apures”, le dije y como siempre pues me deseó que se compusiera mi día; afortunadamente el banco sí estaba abierto para lo que tenía que hacer, así que fue un buen giro. Me puse muy feliz y asumí que sería un buen día, como me lo dijo mi conductor; literalmente NO SABÍA.
-
Salí feliz de poder haber completado esa misión y poder irme a vacunar. Pedí otro Uber a donde tenía que ir y todo bien. Me tocó caminar mucho porque la entrada estaba en punta de la chingada de donde me dejó el conductor, pero no había rollo, era lo de menos. Me desanimé cuando vi que había una cantidad estúpida de gente, era una fila que abarcaba todo el estacionamiento y daba demasiadas vueltas; “ni pedo”, dije, “si mucho me estaré aquí una hora, hora y media”… otra vez, literalmente NO SABÍA.
-
Pasó media hora y había avanzado lo que parecía ser un cuarto de la fila, entonces todo iba bien. Pues nel, había avanzado el equivalente a un octavo de la fila, este pedo no iba a salir en una hora-hora y media. Para acabarla de chingar era todo bajo el tan amado sol de Chiwawa. “No hay pedo, me entretengo tirando chal con alguien en el wasap”, pues no, aparentemente no cargué el celular y ya tenía 15-20% de batería… volví a quedar.
-
Se me acabó la pila, ya había pasado una hora y parecía que la fila era infinita, simplemente avanzábamos demasiado lento, a pesar de que los que venían atrás de mí repetían una y otra vez “mira, avanza bien rápido, ya mero llegamos”, ilusos. Duré aproximadamente 3 horas formado, aguantando conversaciones estúpidas a mi alrededor, gente quejándose por estar parada (yo también me estaba quejando pero dentro de mi cabeza), y por alguna razón iban familias completas de las cuales al final del día sólo uno o dos integrantes de la familia entraban a vacunarse.
-
En fin, se acabó la tortura y ya tocaba irse al cantón, todo bien. “No hay pedo, no me tocó irme en Uber, aquí agarro un camión”, pensé. Pero no, ningún camión pasó durante la hora que estuve esperando y de los 5 taxis que intenté parar NINGUNO se detuvo. Decidí irme caminando, ya qué más daba, en ese punto ya nada más era hacer corajes dioquis.
-
En el camino vi un Oxxo y decidí desviarme para comprar algo de tomar porque andaba bien deshidratado. En el mismo segundo que volteé para ir hacia el Oxxo pasó un camión volando y lo único que pensaba era que el conductor me decía “Jeje ni pedo:)”. Exploté, me acabé, simplemente perdí, saqué el bad ending.
-
Ya estaba harto y hasta iba a comprar un cargador para ya irme rápido, estaba cansado del día, simplemente ahí terminó la quest, había sacado el peor final. Lo bueno es que se me ocurrió pedirle al cajero un cargador y que me tirara paro. Todo bien, pedí mi Uber y llegué a mi casa sano y a salvo, pero con la peor rabia que me había dado en mucho tiempo. Simplemente ¿mi culo? explotado. Este día me tocó un desarrollo de personaje muy cabrón, se mamó el D*****o.
-
Lo único rescatable fue que había una (más bien como 5) chica muy guapa en la fila, lástima que los stats de mi personaje me tienen bloqueadas las conversaciones con desconocidos.
-
Y pues ya, este pex ya me sirvió para desahogarme, una disculpa por la redacción tan pitera. Sobres.
]]>
-
-
- Tenía este pex algo descuidado
- https://blog.luevano.xyz/a/tenia_esto_descuidado.html
- https://blog.luevano.xyz/a/tenia_esto_descuidado.html
- Sun, 18 Jul 2021 07:51:50 GMT
- Short
- Spanish
- Update
- Nada más un update en el estado del blog y lo que he andado haciendo.
- Así es, tenía un poco descuidado este pex, siendo la razón principal que andaba ocupado con cosas de la vida profesional, ayay. Pero ya que ando un poco más despejado y menos estresado voy a seguir usando el blog y a ver qué más hago.
-
Tengo unas entradas pendientes que quiero hacer del estilo de “tutorial” o “how-to”, pero me lo he estado debatiendo, porque Luke ya empezó a hacerlo más de verdad en landchad.net, lo cual recomiendo bastante pues igual yo empecé a hacer esto por él (y por EL ELE EME); aunque la verdad pues es muy específico a como él hace las cosas y quizá sí puede haber diferencias, pero ya veré en estos días. La próxima que quiero hacer es sobre el VPN, porque no lo he setupeado desde que reinicié El Página Web y La Servidor, entonces acomodaré el VPN de nuevo y de pasada tiro entrada de eso.
-
También dejé un dibujo pendiente, que la neta lo dejé por 2 cosas: está bien cabrón (porque también lo quiero colorear) y porque estaba ocupado; de lo cual ya sólo queda el “está bien cabrón”, pero no he tenido el valor de retomarlo. Lo triste es que ya pasó el tiempo del hype y ya no tengo mucha motivación para terminarlo, más que el hecho de que cuando lo termine empezaré a usar Clip Studio Paint en vez de Krita, porque compré una licencia ahora que estuvo en 50% de descuento.
-
Algo bueno es que me he estado sintiendo muy bien conmigo mismo últimamente, aunque casi no hable de eso. Sí hay una razón en específico, pero es una razón algo tonta. Espero así siga.
-
Ah, y también quería acomodarme una sección de comentarios, pero como siempre, todas las opciones están bien bloated, entonces pues me voy a hacer una en corto seguramente en Python para el back, MySQL para la base de datos y Javascript para la conexión acá en el front, algo tranqui. Nel, siempre no ocupo esto, pa’ qué.
-
Sobres pues.
]]>
-
-
- Set up an XMPP server with Prosody compatible with Conversations and Movim
- https://blog.luevano.xyz/a/xmpp_server_with_prosody.html
- https://blog.luevano.xyz/a/xmpp_server_with_prosody.html
- Wed, 09 Jun 2021 05:24:30 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to set up an XMPP server using Prosody on a server running Nginx, on Arch. This server will be compatible with at least Conversations and Movim.
- Update: I no longer host this XMPP server as it consumed a lot of resources and I wasn’t using it that much. I’ll probably re-create it in the future, though.
-
Recently I set up an XMPP server (and a Matrix one, too) for my personal use and for friends if they want one; I made one for EL ELE EME, for example. So, here are the notes on how I set up the server, which is compatible with the Conversations app and the Movim social network. You can see my addresses at contact and the XMPP compliance/score of the server.
As with my other entries, this is under a server running Arch Linux, with the Nginx web server and Certbot certificates. And all commands here are executed as root, unless specified otherwise.
Same as with my other entries (website, mail and git) plus:
-
-
A and (optionally) AAAA DNS records for:
-
xmpp: the actual XMPP server and the file upload service.
-
muc (or conference): for multi-user chats.
-
pubsub: the publish-subscribe service.
-
proxy: a proxy in case one of the users needs it.
-
vjud: user directory.
-
-
-
(Optionally, but recommended) the following SRV DNS records; make sure they point to an A or AAAA record (matching the records from the last point, for example):
-
_xmpp-client._tcp.{your.domain}. for port 5222 pointing to xmpp.{your.domain}.
-
_xmpp-server._tcp.{your.domain}. for port 5269 pointing to xmpp.{your.domain}.
-
_xmpp-server._tcp.muc.{your.domain}. for port 5269 pointing to xmpp.{your.domain}.
-
-
-
SSL certificates for the previous subdomains; similar to my other entries, just create the appropriate prosody.conf file (where server_name will be all the subdomains defined above) and run certbot --nginx. You can find the example configuration file almost at the end of this entry.
-
Email addresses for admin, abuse, contact, security, etc., or use your own email for all of them; it doesn't really matter much, as long as you define them in the configuration and they are valid. I have aliases so those emails are forwarded to me.
-
Allow ports 5000, 5222, 5269, 5280 and 5281 for Prosody, and 3478 and 5349 for Turnserver, which are the defaults for coturn.
We need mercurial to be able to download and update the extra modules needed to make the server compliant with conversations.im and mov.im. Go to /var/lib/prosody, clone the latest Prosody modules repository and prepare the directories:
-
cd /var/lib/prosody
-hg clone https://hg.prosody.im/prosody-modules modules-available
-mkdir modules-enabled
-
-
You can see that I follow a similar approach to the one I used with Nginx and the server configuration, where I have all the modules available in one directory and make symlinks in another to keep track of what is being used; an example is below. You can update the repository by running hg pull --update while inside the modules-available directory (similar to Git).
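For example, to enable a couple of the community modules used in the configuration below (same pattern for the rest):

cd /var/lib/prosody
ln -s /var/lib/prosody/modules-available/mod_smacks modules-enabled/
ln -s /var/lib/prosody/modules-available/mod_cloud_notify modules-enabled/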
Add other modules if needed, but these work for the apps I mentioned. You should also change the permissions of these files:
-
chown -R prosody:prosody /var/lib/prosody
-
-
Now, configure the server by editing the /etc/prosody/prosody.cfg.lua file. It's a bit tricky to configure, so here is my configuration file (lines starting with -- are comments). Make sure to change it according to your domain, and maybe your preferences. Read each line and each comment to know what's going on; it's easier to explain with comments in the file itself than to strip it into a lot of pieces.
-
Also note that the configuration file has a "global" section and a per VirtualHost/Component section: basically, everything above all the VirtualHost/Component sections is global, and everything below each VirtualHost/Component corresponds to that section.
-
-- important for systemd
-daemonize = true
-pidfile = "/run/prosody/prosody.pid"
-
--- or your account; note that this is an XMPP JID, not an email
-admins = { "admin@your.domain" }
-
-contact_info = {
- abuse = { "mailto:abuse@your.domain", "xmpp:abuse@your.domain" };
- admin = { "mailto:admin@your.domain", "xmpp:admin@your.domain" };
- feedback = { "mailto:feedback@your.domain", "xmpp:feedback@your.domain" };
- security = { "mailto:security@your.domain" };
- support = { "mailto:support@your.domain", "xmpp:support@muc.your.domain" };
-}
-
--- so prosody looks up the plugins we added
-plugin_paths = { "/var/lib/prosody/modules-enabled" }
-
-modules_enabled = {
- -- Generally required
- "roster"; -- Allow users to have a roster. Recommended ;)
- "saslauth"; -- Authentication for clients and servers. Recommended if you want to log in.
- "tls"; -- Add support for secure TLS on c2s/s2s connections
- "dialback"; -- s2s dialback support
- "disco"; -- Service discovery
- -- Not essential, but recommended
- "carbons"; -- Keep multiple clients in sync
- "pep"; -- Enables users to publish their avatar, mood, activity, playing music and more
- "private"; -- Private XML storage (for room bookmarks, etc.)
- "blocklist"; -- Allow users to block communications with other users
- "vcard4"; -- User profiles (stored in PEP)
- "vcard_legacy"; -- Conversion between legacy vCard and PEP Avatar, vcard
- "limits"; -- Enable bandwidth limiting for XMPP connections
- -- Nice to have
- "version"; -- Replies to server version requests
- "uptime"; -- Report how long server has been running
- "time"; -- Let others know the time here on this server
- "ping"; -- Replies to XMPP pings with pongs
- "register"; -- Allow users to register on this server using a client and change passwords
- "mam"; -- Store messages in an archive and allow users to access it
- "csi_simple"; -- Simple Mobile optimizations
- -- Admin interfaces
- "admin_adhoc"; -- Allows administration via an XMPP client that supports ad-hoc commands
- --"admin_telnet"; -- Opens telnet console interface on localhost port 5582
- -- HTTP modules
- "http"; -- Explicitly enable http server.
- "bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
- "websocket"; -- XMPP over WebSockets
- "http_files"; -- Serve static files from a directory over HTTP
- -- Other specific functionality
- "groups"; -- Shared roster support
- "server_contact_info"; -- Publish contact information for this service
- "announce"; -- Send announcement to all online users
- "welcome"; -- Welcome users who register accounts
- "watchregistrations"; -- Alert admins of registrations
- "motd"; -- Send a message to users when they log in
- --"legacyauth"; -- Legacy authentication. Only used by some old clients and bots.
- --"s2s_bidi"; -- not yet implemented, have to wait for v0.12
- "bookmarks";
- "checkcerts";
- "cloud_notify";
- "csi_battery_saver";
- "default_bookmarks";
- "http_avatar";
- "idlecompat";
- "presence_cache";
- "smacks";
- "strict_https";
- --"pep_vcard_avatar"; -- not compatible with this version of pep, wait for v0.12
- "watchuntrusted";
- "webpresence";
- "external_services";
- }
-
--- only if you want to disable some modules
-modules_disabled = {
- -- "offline"; -- Store offline messages
- -- "c2s"; -- Handle client connections
- -- "s2s"; -- Handle server-to-server connections
- -- "posix"; -- POSIX functionality, sends server to background, enables syslog, etc.
-}
-
-external_services = {
- {
- type = "stun",
- transport = "udp",
- host = "proxy.your.domain",
- port = 3478
- }, {
- type = "turn",
- transport = "udp",
- host = "proxy.your.domain",
- port = 3478,
- -- you could decide this now or come back later when you install coturn
- secret = "YOUR SUPER SECRET TURN PASSWORD"
- }
-}
-
---- general global configuration
-http_ports = { 5280 }
-http_interfaces = { "*", "::" }
-
-https_ports = { 5281 }
-https_interfaces = { "*", "::" }
-
-proxy65_ports = { 5000 }
-proxy65_interfaces = { "*", "::" }
-
-http_default_host = "xmpp.your.domain"
-http_external_url = "https://xmpp.your.domain/"
--- or if you want to have it somewhere else, change this
-https_certificate = "/etc/prosody/certs/xmpp.your.domain.crt"
-
-hsts_header = "max-age=31556952"
-
-cross_domain_bosh = true
---consider_bosh_secure = true
-cross_domain_websocket = true
---consider_websocket_secure = true
-
-trusted_proxies = { "127.0.0.1", "::1", "192.169.1.1" }
-
-pep_max_items = 10000
-
--- this is disabled by default, and I keep it like this, depends on you
---allow_registration = true
-
--- you might want this options as they are
-c2s_require_encryption = true
-s2s_require_encryption = true
-s2s_secure_auth = false
---s2s_insecure_domains = { "insecure.example" }
---s2s_secure_domains = { "jabber.org" }
-
--- where the certificates are stored (/etc/prosody/certs by default)
-certificates = "certs"
-checkcerts_notify = 7 -- ( in days )
-
--- rate limits on connections to the server, these are my personal settings, because by default they were limited to something like 30kb/s
-limits = {
- c2s = {
- rate = "2000kb/s";
- };
- s2sin = {
- rate = "5000kb/s";
- };
- s2sout = {
- rate = "5000kb/s";
- };
-}
-
--- again, this could be yourself, it is a jid
-unlimited_jids = { "admin@your.domain" }
-
-authentication = "internal_hashed"
-
--- if you don't want to use sql, change it to internal and comment the second line
--- since this is optional, i won't describe how to setup mysql or setup the user/database, that would be out of the scope for this entry
-storage = "sql"
-sql = { driver = "MySQL", database = "prosody", username = "prosody", password = "PROSODY USER SECRET PASSWORD", host = "localhost" }
-
-archive_expires_after = "4w" -- configure message archive
-max_archive_query_results = 20;
-mam_smart_enable = true
-default_archive_policy = "roster" -- archive only messages from users who are in your roster
-
--- normally you would like at least one log file of certain level, but I keep all of them, the default is only the info = "*syslog" one
-log = {
- info = "*syslog";
- warn = "prosody.warn";
- error = "prosody.err";
- debug = "prosody.debug";
- -- "*console"; -- Needs daemonize=false
-}
-
--- cloud_notify
-push_notification_with_body = false -- Whether or not to send the message body to remote pubsub node
-push_notification_with_sender = false -- Whether or not to send the message sender to remote pubsub node
-push_max_errors = 5 -- persistent push errors are tolerated before notifications for the identifier in question are disabled
-push_max_devices = 5 -- number of allowed devices per user
-
--- by default every user on this server will join these muc rooms
-default_bookmarks = {
- { jid = "room@muc.your.domain", name = "The Room" };
- { jid = "support@muc.your.domain", name = "Support Room" };
-}
-
--- could be your jid
-untrusted_fail_watchers = { "admin@your.domain" }
-untrusted_fail_notification = "Establishing a secure connection from $from_host to $to_host failed. Certificate hash: $sha1. $errors"
-
------------ Virtual hosts -----------
-VirtualHost "your.domain"
- name = "Prosody"
- http_host = "xmpp.your.domain"
-
-disco_items = {
- { "your.domain", "Prosody" };
- { "muc.your.domain", "MUC Service" };
- { "pubsub.your.domain", "Pubsub Service" };
- { "proxy.your.domain", "SOCKS5 Bytestreams Service" };
- { "vjud.your.domain", "User Directory" };
-}
-
-
--- Multi-user chat
-Component "muc.your.domain" "muc"
- name = "MUC Service"
- modules_enabled = {
- --"bob"; -- not compatible with this version of Prosody
- "muc_limits";
- "muc_mam"; -- message archive in muc, again, a placeholder
- "muc_mam_hints";
- "muc_mention_notifications";
- "vcard_muc";
- }
-
- restrict_room_creation = false
-
- muc_log_by_default = true
- muc_log_presences = false
- log_all_rooms = false
- muc_log_expires_after = "1w"
- muc_log_cleanup_interval = 4 * 60 * 60
-
-
--- Upload
-Component "xmpp.your.domain" "http_upload"
- name = "Upload Service"
- http_host= "xmpp.your.domain"
- -- you might want to change this, these are numbers in bytes, so 10MB and 100MB respectively
- http_upload_file_size_limit = 1024*1024*10
- http_upload_quota = 1024*1024*100
-
-
--- Pubsub
-Component "pubsub.your.domain" "pubsub"
- name = "Pubsub Service"
- pubsub_max_items = 10000
- modules_enabled = {
- "pubsub_feeds";
- "pubsub_text_interface";
- }
-
- -- personally i don't have any feeds configured
- feeds = {
- -- The part before = is used as PubSub node
- --planet_jabber = "http://planet.jabber.org/atom.xml";
- --prosody_blog = "http://blog.prosody.im/feed/atom.xml";
- }
-
-
--- Proxy
-Component "proxy.your.domain" "proxy65"
- name = "SOCKS5 Bytestreams Service"
- proxy65_address = "proxy.your.domain"
-
-
--- Vjud, user directory
-Component "vjud.your.domain" "vjud"
- name = "User Directory"
- vjud_mode = "opt-in"
-
-
You HAVE to read all of the configuration file, because there are a lot of things that you need to change to make it work with your server/domain. Test the configuration file with:
-
luac5.2 -p /etc/prosody/prosody.cfg.lua
-
-
Notice that by default prosody will look up certificates that look like sub.your.domain, but if you get the certificates like I do, you’ll have a single certificate for all subdomains, and by default it is in /etc/letsencrypt/live, which has some strict permissions. So, to import it you can run:
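That command being prosodyctl’s certificate import (run as root, pointing at the live directory):

prosodyctl --root cert import /etc/letsencrypt/live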
Ignore the complaining about not finding the subdomain certificates, and note that you will have to run that command on each certificate renewal. To automate this, add the --deploy-hook flag to your automated Certbot renewal setup; for me it’s a systemd timer with the following certbot.service:
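The service file itself isn’t preserved in this copy; a minimal sketch of what such a certbot.service could look like, assuming the import command from above:

[Unit]
Description=Certbot renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --deploy-hook "prosodyctl --root cert import /etc/letsencrypt/live"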
That’s basically all the configuration that needs Prosody itself, but we still have to configure Nginx and Coturn before starting/enabling the prosody service.
Since this is not an ordinary configuration file I’m going to describe this, too. Your prosody.conf file should have the following location blocks under the main server block (the one that listens to HTTPS):
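The exact blocks aren’t preserved here; a sketch of what they could look like, proxying BOSH and websockets to Prosody’s HTTPS port (5281), per the configuration above:

location /http-bind {
    proxy_pass https://localhost:5281/http-bind;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_buffering off;
    tcp_nodelay on;
}

location /xmpp-websocket {
    proxy_pass https://localhost:5281/xmpp-websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 900s;
}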
And you will need the following host-meta and host-meta.json files inside the .well-known/acme-challenge directory for your.domain (following my nomenclature: /var/www/yourdomaindir/.well-known/acme-challenge/).
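These files implement XEP-0156 (discovering alternative connection methods); something along these lines, with the hrefs adjusted to wherever you actually serve BOSH/websockets:

# host-meta
<?xml version='1.0' encoding='utf-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
  <Link rel="urn:xmpp:alt-connections:xbosh" href="https://xmpp.your.domain:5281/http-bind"/>
  <Link rel="urn:xmpp:alt-connections:websocket" href="wss://xmpp.your.domain:5281/xmpp-websocket"/>
</XRD>

# host-meta.json
{
  "links": [
    {"rel": "urn:xmpp:alt-connections:xbosh", "href": "https://xmpp.your.domain:5281/http-bind"},
    {"rel": "urn:xmpp:alt-connections:websocket", "href": "wss://xmpp.your.domain:5281/xmpp-websocket"}
  ]
}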
Remember to have your prosody.conf file symlinked (or otherwise discoverable by Nginx) in the sites-enabled directory. You can now (optionally) test the configuration and restart your nginx service:
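That is, the usual:

nginx -t
systemctl restart nginx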
Coturn is the implementation of TURN and STUN server, which in general is for (at least in the XMPP world) voice support and external service discovery.
-
Install the coturn package:
-
pacman -S coturn
-
-
You can modify the configuration file (located at /etc/turnserver/turnserver.conf) as desired, but at least you need to make the following changes (uncomment or edit):
-
use-auth-secret
-realm=proxy.your.domain
-static-auth-secret=YOUR SUPER SECRET TURN PASSWORD
-
-
I’m sure there is more configuration to be made, like using SQL to store data and whatnot, but for now this is enough for me. Note that you may be missing some functionality needed to create dynamic users for the TURN server; to be honest I haven’t tested this since I don’t use this feature in my XMPP clients, but if it doesn’t work, or you know of an error or missing configuration, don’t hesitate to contact me.
And you can add your first user with the prosodyctl command (it will prompt you to add a password):
-
prosodyctl adduser user@your.domain
-
-
You may want to add a compliance user, so you can check if your server is set up correctly. To do so, go to the XMPP Compliance Tester and enter the compliance user’s credentials. It should get a compliance score similar to mine:
-
-
Additionally, you can test the security of your server on IM Observatory; here you only need to specify your domain.name (not xmpp.domain.name, if you set up the SRV DNS records correctly). Again, it should get a score similar to mine:
-
-
You can now log in with your XMPP client of choice; if it asks for the server, it should be xmpp.your.domain (or your.domain for some clients), and your login credentials are you@your.domain plus the password you chose (which you can change in most clients).
-
That’s it, send me a message at david@luevano.xyz if you were able to set up the server successfully.
]]>
-
-
- Al fin ya me acomodé la página pa' los dibujos
- https://blog.luevano.xyz/a/acomodada_la_pagina_de_arte.html
- https://blog.luevano.xyz/a/acomodada_la_pagina_de_arte.html
- Sun, 06 Jun 2021 19:06:09 GMT
- Short
- Spanish
- Update
- Actualización en el estado de la página, en este caso sobre la existencia de una nueva página para los dibujos y arte en general.
- Así es, ya quedó acomodado el sub-dominio art.luevano.xyz pos pal arte veda. Entonces pues ando feliz por eso.
-
Este pedo fue gracias a que me reescribí la forma en la que pyssg maneja los templates, ahora uso el sistema de jinja en vez del cochinero que hacía antes.
]]>
-
-
- Así nomás está quedando el página
- https://blog.luevano.xyz/a/asi_nomas_esta_quedando.html
- https://blog.luevano.xyz/a/asi_nomas_esta_quedando.html
- Fri, 04 Jun 2021 08:24:03 GMT
- Short
- Spanish
- Update
An update on the state of the site, the XMPP and Matrix servers I set up, and the next things I want to do.
I’ve been tidying up the sItE a bit more; I finally added the contact and donate “sections”, in case there’s some madman out there who wants to throw money my way.
-
I also set up an XMPP server which, in a nutshell, is a decentralized instant messaging (and more) protocol, so anyone can make an account on whatever server they want and connect with accounts created on other servers… exactly, like with email. And this is badass, because if you have your own server, just like with an email server, you can control what features it has, who can make an account, whether there’s end-to-end encryption (or at least end-to-server), among a ton of other things.
-
Right now this server is compliant (“SUMISO” in Spanish, hehe) so it works with the conversations app and the movim social network, but it would really work with almost any XMPP client, unless that client implements something my server doesn’t have. I also set up a Matrix server, which is very similar but under another protocol, and feels more like a discord/slack (at least in element); pretty kick-ass too.
-
While there are still things to do on these two servers I set up (besides writing some entries documenting how I did it), I want to move on to something else: putting together a drawings section, which in theory is pretty simple, but since I want to automate publishing those, I want to modify pyssg a bit so it works nicely for this.
-
And lastly, I also want to touch up the CSS a bit, because I left it in a really crappy state and I want to add/adjust a few things so it looks cleaner and somewhat pretty… within reason, because obviously I couldn’t care less if it looks like a page from the 2000s.
-
Update: I already took down the XMPP server because it consumed quite a few resources and I wasn’t using it that much; if I get a better server in the future I might host it again.
]]>
-
-
- I'm using a new blogging system
- https://blog.luevano.xyz/a/new_blogging_system.html
- https://blog.luevano.xyz/a/new_blogging_system.html
- Fri, 28 May 2021 03:21:39 GMT
- English
- Short
- Tools
- Update
- I created a new blogging system called pyssg, which is based on what I was using but, to be honest, better.
So, I was tired of working with ssg (and then sbg, a modified version of ssg that I “wrote”), for one general reason: not being able to extend it as I would like; and not just for dumb little stuff, I wanted more control, to add tags (which another tool I found, blogit, does) and even more in the future.
-
The solution? Write a new program “from scratch” in pYtHoN. Yes, it is bloated; yes, it is in its early stages; but it works just as I want it to work, and I’m pretty happy so far with the results. I have even more ideas in mind to “optimize” and generally clean up my wOrKfLoW for posting new blog entries. I’ve even thought of using it to post into a “feed”-like gallery for drawings or pictures in general.
-
I called it pyssg, because it sounds nice and it wasn’t taken on PyPI. It is just a terminal program that reads either a configuration file or the options passed as flags when calling the program.
-
It still uses Markdown files because I find them very easy to work with. And instead of just having a “header” and a “footer” applied to each parsed entry, you have templates (generated with the program) for each piece that I thought made sense (idea taken from blogit): the common header and footer, the common header and footer for each entry, and the header, footer and list elements for articles and tags. When parsing the Markdown file these templates are applied and stitched together into a single HTML file. It also generates an RSS feed and the sitemap.xml file, which is nice.
-
It might sound convoluted, but it works pretty well, with of course room to improve; I’m open to suggestions, issue reporting or direct contributions here. For now, it is only tested on Linux (and I don’t plan on making it work on Windows, but feel free to do a PR for compatibility).
Update: Since writing this entry, pyssg has evolved quite a bit, so not everything described here is still true. For the latest updates check the newest entries or the git repository itself.
]]>
-
-
- Set up a Git server and cgit front-end
- https://blog.luevano.xyz/a/git_server_with_cgit.html
- https://blog.luevano.xyz/a/git_server_with_cgit.html
- Sun, 21 Mar 2021 19:00:29 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to create a Git server using cgit on a server running Nginx, on Arch. This is a follow up on post about creating a website with Nginx and Certbot.
My git server is all I need to set up to actually kill my other server (I’ve been migrating servers over these last 2-3 blog entries), which is why I’m already doing this entry. I’m basically following git’s guide on setting up a server, plus some specific stuff for btw i use Arch Linux (Arch Linux Wiki: Git server and Step by step guide on setting up git server in arch linux (pushable)).
-
Note that this is mostly for personal use, so there’s no user/authentication control other than that of normal ssh. And as with the other entries, most if not all commands here are run as root unless stated otherwise.
I might get tired of saying this (it’s just copy paste, basically)… but you will need the same prerequisites as before (check my website and mail entries), with the extras:
-
-
(Optional, if you want a “front-end”) A CNAME for “git” and (optionally) “www.git”, or some other name for your sub-domains.
-
An SSL certificate; if you’re following the other entries, add a git.conf and run certbot --nginx to extend the certificate.
If not installed already, install the git package:
-
pacman -S git
-
-
On Arch Linux, when you install the git package, a git user is automatically created, so all you have to do is decide where you want to store the repositories. I like them to be in /home/git, as if git were a “normal” user. So, create the git folder (with corresponding permissions) under /home and set the git user’s home to /home/git:
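The commands aren’t preserved in this copy; they would be something along these lines (the usermod home change being the important part):

mkdir /home/git
chown git:git /home/git
usermod -d /home/git git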
Also, the git user is “expired” by default and will be locked (needs a password), change that with:
-
chage -E -1 git
-passwd git
-
-
Give it a strong one and remember to use PasswordAuthentication no for ssh (as you should). Create the .ssh/authorized_keys for the git user and set the permissions accordingly:
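Something like the following (as root, then fix ownership and permissions):

mkdir -p /home/git/.ssh
touch /home/git/.ssh/authorized_keys
chown -R git:git /home/git/.ssh
chmod 700 /home/git/.ssh
chmod 600 /home/git/.ssh/authorized_keys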
Now it’s a good idea to copy over your local SSH public keys to this file, to be able to push/pull to the repositories. Do it by either manually copying them or using ssh‘s built-in ssh-copy-id (for that you may want to check your ssh configuration in case you don’t let people access your server with user/password).
-
Next, and almost finally, we need to edit the git-daemon service, located at /usr/lib/systemd/system/ (called git-daemon@.service):
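The edited line isn’t preserved in this copy; after the change described below, the unit’s ExecStart would end up looking something like this (the remaining flags per the stock unit file):

ExecStart=-/usr/lib/git-core/git-daemon --inetd --export-all --base-path=/home/git --enable=receive-pack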
I just appended --enable=receive-pack and note that I also changed the --base-path to reflect where I want to serve my repositories from (has to match what you set when changing git user’s home).
-
Now, go ahead and start and enable the git-daemon socket:
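That is:

systemctl enable --now git-daemon.socket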
You’re basically done. Now you should be able to push/pull repositories to your server… except, you haven’t created any repository in your server, that’s right, they’re not created automatically when trying to push. To do so, you have to run (while inside /home/git):
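The two commands themselves are missing in this copy; creating a bare repository and fixing its ownership would be:

git init --bare {your-repo}.git
chown -R git:git {your-repo}.git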
Those two lines above will need to be run each time you want to add a new repository to your server. There are options to “automate” this but I like it this way.
-
After that you can already push/pull to your repository. I have my repositories (locally) set up so I can push to more than one remote at the same time (my server, GitHub, GitLab, etc.); to do so, check this gist.
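The Nginx configuration for cgit isn’t preserved here. On Arch the usual setup is cgit served through fcgiwrap; a sketch of a git.conf, assuming the stock Arch paths (/usr/lib/cgit/cgit.cgi and /usr/share/webapps/cgit):

pacman -S cgit fcgiwrap
systemctl enable --now fcgiwrap.socket

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name git.your.domain www.git.your.domain;

    location / {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/lib/cgit/cgit.cgi;
        fastcgi_param PATH_INFO $uri;
        fastcgi_param QUERY_STRING $args;
        fastcgi_param HTTP_HOST $server_name;
        fastcgi_pass unix:/run/fcgiwrap.sock;
    }

    location ~* ^.+(cgit.css|cgit.png|favicon.ico|robots.txt)$ {
        root /usr/share/webapps/cgit/;
    }
}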
Where the server_name line depends on you, I have mine setup to git.luevano.xyz and www.git.luevano.xyz. Optionally run certbot --nginx to get a certificate for those domains if you don’t have already.
-
Now, all that’s left is to configure cgit. Create the configuration file /etc/cgitrc with the following content (my personal options, pretty much the default):
Where you can uncomment the robots line to stop web crawlers (like Google’s) from indexing your git web app. And at the end keep all your repositories (the ones you want to make public); for example, for my dotfiles I have:
-
...
-repo.url=.dots
-repo.path=/home/git/.dots.git
-repo.owner=luevano
-repo.desc=These are my personal dotfiles.
-...
-
-
Otherwise you could let cgit automatically detect your repositories (be careful if you want to keep “private” repos) using the option scan-path, and set up .git/description for each repository. For more, you can check cgitrc(5).
For syntax highlighting, the filter script that cgit ships (which wraps the highlight package, so install that too if missing) needs a tweak: edit it to use version 3 and add --inline-css for more options without editing cgit‘s CSS file:
-
...
-# This is for version 2
-# exec highlight --force -f -I -X -S "$EXTENSION" 2>/dev/null
-
-# This is for version 3
-exec highlight --force --inline-css -f -I -O xhtml -S "$EXTENSION" 2>/dev/null
-...
-
-
Finally, enable the filter in /etc/cgitrc configuration:
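That is, pointing cgit’s source-filter at the script (stock Arch path; adjust if yours differs):

source-filter=/usr/lib/cgit/filters/syntax-highlighting.sh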
That would be everything. If you need support for more stuff like compressed snapshots or support for markdown, check the optional dependencies for cgit.
]]>
-
-
- Set up a Mail server with Postfix, Dovecot, SpamAssassin and OpenDKIM
- https://blog.luevano.xyz/a/mail_server_with_postfix.html
- https://blog.luevano.xyz/a/mail_server_with_postfix.html
- Sun, 21 Mar 2021 04:05:59 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to set up a Mail server using Postfix, Dovecot, SpamAssassin and OpenDKIM, on Arch. This is a follow up on post about creating a website with Nginx and Certbot.
The entry is going to be long because it’s a tedious process. This is also based on Luke Smith’s script, but adapted to Arch Linux (his script works on debian-based distributions). This entry is mostly so I can record all the notes required while I’m in the process of installing/configuring the mail server on a new VPS of mine. I was also going to write a script that does everything in one go (for Arch Linux), to be hosted here, but I haven’t had time to do it, so never mind that; if I ever do it I’ll make a new entry regarding it.
-
This configuration works for local users (users that appear in /etc/passwd), and does not use any type of SQL database. Do note that I’m not running Postfix in a chroot, which can be a problem if you’re following my steps, as noted by Bojan; in case you want to run it in a chroot, add the steps shown in the Arch wiki: Postfix in a chroot jail. The issue when following my steps with a chroot is that there will be problems resolving the hostname, due to /etc/hosts or /etc/hostname not being available in the chroot.
-
All commands executed here are run with root privileges, unless stated otherwise.
You will need a CNAME for “mail” and (optionally) “www.mail”, or whatever you want to call the sub-domains (although the RFC 2181 states that it NEEDS to be an A record, fuck the police).
-
An SSL certificate. You can use the SSL certificate obtained following my last post using certbot (just create a mail.conf and run certbot --nginx again).
-
Ports 25, 587 (SMTP), 465 (SMTPS), 143 (IMAP) and 993 (IMAPS) open on the firewall (I use ufw).
Postfix is a “mail transfer agent” which is the component of the mail server that receives and sends emails via SMTP.
-
Install the postfix package:
-
pacman -S postfix
-
-
We have two main files to configure (inside /etc/postfix): master.cf (master(5)) and main.cf (postconf(5)). We’re going to edit main.cf first either by using the command postconf -e 'setting' or by editing the file itself (I prefer to edit the file).
-
Note that the default file itself has a lot of comments with description on what each thing does (or you can look up the manual, linked above), I used what Luke’s script did plus some other settings that worked for me.
-
Now, first locate where your website cert is; mine is at the default location /etc/letsencrypt/live/, so my certdir is /etc/letsencrypt/live/luevano.xyz. Given this information, change {yourcertdir} on the corresponding lines. The configuration described below has to be appended to the main.cf configuration file.
-
Certificates and ciphers to use for authentication and security:
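These lines aren’t preserved in this copy; a sketch based on Luke’s setup (replace {yourcertdir} accordingly):

smtpd_tls_cert_file = {yourcertdir}/fullchain.pem
smtpd_tls_key_file = {yourcertdir}/privkey.pem
smtpd_use_tls = yes
smtpd_tls_security_level = may
smtpd_tls_auth_only = yes
smtp_tls_security_level = may
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
tls_preempt_cipherlist = yes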
Specify the mailbox home; this is going to be a directory inside your user’s home containing the actual mail files, for example it will end up being /home/david/Mail/Inbox:
-
home_mailbox = Mail/Inbox/
-
-
Pre-configuration to work seamlessly with dovecot and opendkim:
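A sketch of what this block plausibly contained (again modeled on Luke’s setup): SASL through dovecot, plus the milter hooks for opendkim (port 8891 being opendkim’s conventional inet socket):

smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous, noplaintext
smtpd_sasl_tls_security_options = noanonymous

# adjust these to your actual hostname/domain
myhostname = {yourdomainname}
mydomain = {yourdomainname}
mydestination = $myhostname, localhost.$mydomain, localhost

milter_default_action = accept
milter_protocol = 6
smtpd_milters = inet:127.0.0.1:8891
non_smtpd_milters = inet:127.0.0.1:8891

mailbox_command = /usr/lib/dovecot/deliver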
Where {yourdomainname} is luevano.xyz in my case. Lastly, if you don’t want to leak the sender’s IP and user agent (the application used to send the mail), add the following line:
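The line isn’t preserved here; it was presumably a header check along these lines, together with a small regexp file:

smtp_header_checks = regexp:/etc/postfix/smtp_header_checks

# with /etc/postfix/smtp_header_checks containing something like:
/^X-Originating-IP:/ IGNORE
/^User-Agent:/ IGNORE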
That’s it for main.cf, now we have to configure master.cf. This one is a bit more tricky.
-
First look up the lines (they’re uncommented) smtp inet n - n - - smtpd, smtp unix - - n - - smtp and -o syslog_name=postfix/$service_name and either comment them out or delete them… or just run sed -i "/^\s*-o/d;/^\s*submission/d;/\s*smtp/d" /etc/postfix/master.cf as stated in Luke’s script.
-
Lastly, append the following lines to complete postfix setup and pre-configure for spamassassin.
-
smtp unix - - n - - smtp
-smtp inet n - y - - smtpd
- -o content_filter=spamassassin
-submission inet n - y - - smtpd
- -o syslog_name=postfix/submission
- -o smtpd_tls_security_level=encrypt
- -o smtpd_sasl_auth_enable=yes
- -o smtpd_tls_auth_only=yes
-smtps inet n - y - - smtpd
- -o syslog_name=postfix/smtps
- -o smtpd_tls_wrappermode=yes
- -o smtpd_sasl_auth_enable=yes
-spamassassin unix - n n - - pipe
- user=spamd argv=/usr/bin/vendor_perl/spamc -f -e /usr/sbin/sendmail -oi -f \${sender} \${recipient}
-
Before starting the postfix service, you need to run newaliases, but you can do a bit of configuration beforehand by editing the file /etc/postfix/aliases. I only change the root: you line (where you is the account that will be receiving “root” mail). After you’re done, run:
-
postalias /etc/postfix/aliases
-newaliases
-
-
At this point you’re done configuring postfix and you can already start/enable the postfix service:
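That is:

systemctl enable --now postfix.service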
Dovecot is an IMAP and POP3 server, which is what lets an email application retrieve the mail.
-
Install the dovecot and pigeonhole (sieve for dovecot) packages:
-
pacman -S dovecot pigeonhole
-
-
On Arch, by default, there is no /etc/dovecot directory with default configurations set in place, but the package does provide example configuration files. Create the dovecot directory under /etc and, optionally, copy the dovecot.conf file and conf.d directory into it:
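Assuming the example configuration ships where Arch usually puts it:

mkdir /etc/dovecot
cp /usr/share/doc/dovecot/example-config/dovecot.conf /etc/dovecot/
cp -r /usr/share/doc/dovecot/example-config/conf.d /etc/dovecot/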
As Luke stated, dovecot comes with a lot of “modules” (under /etc/dovecot/conf.d/ if you copied that folder) for all sorts of configurations that you can include, but I do as he does and just edit/create the whole dovecot.conf file; although I would like to check each of the separate configuration files dovecot provides, the options Luke uses are more than good enough.
-
I’m working with an empty dovecot.conf file. Add the following lines for SSL and login configuration (also replace {yourcertdir} with the same certificate directory described in the Postfix section above, note that the < is required):
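A sketch of those lines, modeled on Luke’s configuration (the < makes dovecot read the file’s contents):

ssl = required
ssl_cert = <{yourcertdir}/fullchain.pem
ssl_key = <{yourcertdir}/privkey.pem
ssl_min_protocol = TLSv1.2
ssl_prefer_server_ciphers = yes
ssl_dh = </etc/dovecot/dh.pem
auth_mechanisms = plain login
auth_username_format = %n
protocols = $protocols imap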
You may notice we specify a file we don’t have under /etc/dovecot: dh.pem. We need to create it with openssl (you should already have it installed if you’ve been following this entry and the one for nginx). Just run (might take a few minutes):
-
openssl dhparam -out /etc/dovecot/dh.pem 4096
-
-
After that, the next lines define what a “valid user is” (really just sets the database for users and passwords to be the local users with their password):
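Presumably something like this, authenticating against local system users via PAM:

userdb {
  driver = passwd
}
passdb {
  driver = pam
}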
Next comes the mail directory structure (it has to match the one described in the Postfix section). Here, the LAYOUT option is important so the boxes are .Sent instead of Sent. Add the next lines (plus any you like):
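A sketch matching the home_mailbox set in Postfix above:

mail_location = maildir:~/Mail:INBOX=~/Mail/Inbox:LAYOUT=fs
namespace inbox {
  inbox = yes
  mailbox Drafts { special_use = \Drafts }
  mailbox Junk { special_use = \Junk }
  mailbox Sent { special_use = \Sent }
  mailbox Trash { special_use = \Trash }
  mailbox Archive { special_use = \Archive }
}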
Next up is OpenDKIM: install the opendkim package and generate a key pair (opendkim-genkey is the usual tool for this). You need to change {yourdomain} and {yoursubdomain} (it doesn’t really need to be the sub-domain, it could be anything that describes your key) accordingly; for me they’re luevano.xyz and mail, respectively. After that, we need to create some files inside the /etc/opendkim directory. First, create the file KeyTable with the content:
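The content isn’t preserved here; in the standard opendkim KeyTable format it would be a single line like:

{yoursubdomain}._domainkey.{yourdomain} {yourdomain}:{yoursubdomain}:/etc/opendkim/{yoursubdomain}.private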
Then create the SigningTable and TrustedHosts files (sketched below); in the latter, make sure to include your server IP and something like {yoursubdomain}.{yourdomain}.
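Sketches of both files, in standard opendkim format ({yourserverip} being your server’s IP):

# /etc/opendkim/SigningTable
*@{yourdomain} {yoursubdomain}._domainkey.{yourdomain}

# /etc/opendkim/TrustedHosts
127.0.0.1
::1
{yourserverip}
{yoursubdomain}.{yourdomain}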
-
Next, edit /etc/opendkim/opendkim.conf to reflect the changes (or rather, addition) of these files, as well as some other configuration. You can look up the example configuration file located at /usr/share/doc/opendkim/opendkim.conf.sample, but I’m creating a blank one with the contents:
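The contents aren’t preserved in this copy; a plausible minimal version, matching the files above and the Postfix milter on port 8891:

Domain {yourdomain}
Selector {yoursubdomain}
Canonicalization relaxed/simple
KeyTable refile:/etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable
InternalHosts refile:/etc/opendkim/TrustedHosts
ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
Socket inet:8891@localhost
UserID opendkim
UMask 002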
I’m using root:opendkim so opendkim doesn’t complain about the {yoursubdomain}.private key being insecure (you can change that by using the option RequireSafeKeys False in the opendkim.conf file, as stated here).
-
That’s it for the general configuration, but you could go more in depth and be more secure with some extra configuration.
Add the following TXT records on your domain registrar (these examples are for Epik):
-
-
DKIM entry: look up your {yoursubdomain}.txt file, it should look something like:
-
-
{yoursubdomain}._domainkey IN TXT ( "v=DKIM1; k=rsa; s=email; "
- "p=..."
- "..." ) ; ----- DKIM key mail for {yourdomain}
-
-
In the TXT record you will place {yoursubdomain}._domainkey as the “Host” and "v=DKIM1; k=rsa; s=email; " "p=..." "..." in the “TXT Value” (replace the dots with the actual value you see in your file).
-
-
-
DMARC entry: just _dmarc.{yourdomain} as the “Host” and "v=DMARC1; p=reject; rua=mailto:dmarc@{yourdomain}; fo=1" as the “TXT Value”.
-
-
-
SPF entry: just @ as the “Host” and "v=spf1 mx a:{yoursubdomain}.{yourdomain} -all" as the “TXT Value”.
-
-
-
And at this point you could test your mail for spoofing and more.
Then (after installing the spamassassin package), you can edit local.cf (located in /etc/mail/spamassassin) to fit your needs (I only uncommented the rewrite_header Subject ... line). And then you can run the following command to update the patterns and compile them:
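Presumably:

sa-update
sa-compile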
And you could also execute sa-learn to train spamassassin‘s bayes filter, but this works for me. Then create the timer spamassassin-update.timer (together with a matching spamassassin-update.service) under /etc/systemd/system, with content along these lines:
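A sketch of the pair, assuming the update command from above:

# /etc/systemd/system/spamassassin-update.timer
[Unit]
Description=Update spamassassin rules daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/spamassassin-update.service
[Unit]
Description=Update spamassassin rules

[Service]
Type=oneshot
# sa-update exits non-zero when there are no new rules; tweak to taste
ExecStart=/usr/bin/sh -c 'sa-update && sa-compile'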
Next, you may want to edit the spamassassin service before starting and enabling it, because by default it could spawn a lot of “children”, eating a lot of resources, and you really only need one. Append --max-children=1 to the ExecStart=... line in /usr/lib/systemd/system/spamassassin.service:
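The rest of the flags should stay as they are in the stock unit; only --max-children=1 is added, e.g.:

ExecStart=/usr/bin/vendor_perl/spamd -x --max-children=1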
We should have a working mail server by now. Before continuing check your journal logs (journalctl -xe --unit={unit}, where {unit} could be spamassassin.service for example) to see if there was any error whatsoever and try to debug it, it should be a typo somewhere because all the settings and steps detailed here just worked; I literally just finished doing everything on a new server as of the writing of this text, it just werks on my machine.
-
Now, to actually use the mail service: first of all, you need a normal account (don’t use root) that belongs to the mail group (gpasswd -a {user} {group} adds user {user} to group {group}) and that has a password.
-
Next, to actually log in to a mail app/program, you will use the following settings, at least for Thunderbird (I tested in Windows’ default mail app and you don’t need a lot of settings):
-
-
Server: subdomain.domain (mail.luevano.xyz in my case)
-
SMTP port: 587
-
SMTPS port: 465 (I use this one)
-
IMAP port: 143
-
IMAPS port: 993 (again, I use this one)
-
Connection/security: SSL/TLS
-
Authentication method: Normal password
-
Username: just your user, not the whole email (david in my case)
-
Password: your user password (as in the password you use to login to the server with that user)
-
-
All that’s left to do is test your mail server for spoofing, and to see if everything is setup correctly. Go to DKIM Test and follow the instructions (basically click next, and send an email with whatever content to the email that they provide). After you send the email, you should see something like:
-]]>
-
-
- Set up a website with Nginx and Certbot
- https://blog.luevano.xyz/a/website_with_nginx.html
- https://blog.luevano.xyz/a/website_with_nginx.html
- Fri, 19 Mar 2021 02:58:15 GMT
- Code
- English
- Server
- Tools
- Tutorial
- How to set up a website using Nginx for web server and Certbot for SSL certificates, on Arch. This is a base for future blog posts about similar topics.
- These are general notes on how to setup a Nginx web server plus Certbot for SSL certificates, initially learned from Luke’s video and after some use and research I added more stuff to the mix. And, actually at the time of writing this entry, I’m configuring the web server again on a new VPS instance, so this is going to be fresh.
-
As a side note, i use arch btw, so everything here is aimed at an Arch Linux distro, and I’m doing everything on a VPS. Also note that most if not all commands here are executed with root privileges.
A domain name (duh!). I got mine on Epik (affiliate link, btw).
-
With the corresponding A and AAAA records pointing to the VPS’ IPs. I have three records for each type: empty string, “www” and “*” (a wildcard), so that “domain.name”, “www.domain.name” and “anythingelse.domain.name” all point to the same VPS (meaning that you could have several VPSs for different sub-domains). These depend on the VPS provider.
-
-
-
A VPS or somewhere else to host it. I’m using Vultr (also an affiliate link, btw).
-
With ssh already configured both on the local machine and on the remote machine.
-
Firewall already configured to allow ports 80 (HTTP) and 443 (HTTPS). I use ufw so it’s just a matter of doing ufw allow 80,443/tcp (for example) as root and you’re golden.
-
cron installed if you follow along (you could use systemd timers, or some other method you prefer to automate running commands every certain time).
Nginx is a web (HTTP) server and reverse proxy server.
-
You have two options: nginx and nginx-mainline. I prefer nginx-mainline because it’s the “up to date” package even though nginx is labeled to be the “stable” version. Install the package and enable/start the service:
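That is:

pacman -S nginx-mainline
systemctl enable --now nginx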
And that’s it, at this point you can already look at the default initial page of Nginx if you enter the IP of your server in a web browser. You should see something like this:
-
-
As stated in the welcome page, configuration is needed, head to the directory of Nginx:
-
cd /etc/nginx
-
-
Here you have several files, the important one being nginx.conf, which as its name implies contains general configuration of the web server. If you peek into the file, you will see that it contains around 120 lines, most of which are commented out, plus the welcome page server block. While you can configure a website in this file, it’s common practice to do it in a separate file (so you can scale really easily if needed, for more websites or sub-domains).
-
Inside the nginx.conf file, delete the server blocks and add the lines include sites-enabled/*; (to look into individual server configuration files) and types_hash_max_size 4096; (to get rid of an ugly warning that will keep appearing) somewhere inside the http block. The final nginx.conf file would look something like (ignoring the comments just for clarity, but you can keep them as side notes):
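The example files themselves aren’t preserved in this copy; a sketch of what they plausibly looked like, using my naming (create the sites-available/sites-enabled directories first if they don’t exist):

mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled

# /etc/nginx/nginx.conf
user http;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    types_hash_max_size 4096;
    include sites-enabled/*;
}

# and a per-site file, say /etc/nginx/sites-available/domain.conf
server {
    listen 80;
    listen [::]:80;
    root /var/www/some_folder;
    server_name domain.name www.domain.name;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}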
That could serve as a template if you intend to add more domains.
-
Note some things:
-
-
listen: we’re telling Nginx which port to listen to (IPv4 and IPv6, respectively).
-
root: the root directory of where the website files (.html, .css, .js, etc. files) are located. I followed Luke’s directory path /var/www/some_folder.
-
server_name: the actual domain to “listen” to (for my website it is: server_name luevano.xyz www.luevano.xyz; and for this blog is: server_name blog.luevano.xyz www.blog.luevano.xyz;).
-
index: what file to serve as the index (could be any .html, .htm, .php, etc. file) when just entering the website.
-
location: what goes after domain.name, used in case of different configurations depending on the URL paths (deny access on /private, make a proxy on /proxy, etc).
-
try_files: tells what files to look for.
-
-
-
-
Then, make a symbolic link from this configuration file to the sites-enabled directory:
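Something like:

ln -s /etc/nginx/sites-available/domain.conf /etc/nginx/sites-enabled/domain.conf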
This is so the nginx.conf file can look up the newly created server configuration. With this method of having each server configuration file separate you can easily “deactivate” any website by just deleting the symbolic link in sites-enabled and you’re good, or just add new configuration files and keep everything nice and tidy.
-
All you have to do now is restart (or enable and start if you haven’t already) the Nginx service (and optionally test the configuration):
-
nginx -t
-systemctl restart nginx
-
-
If everything goes correctly, you can now go to your website by typing domain.name on a web browser. But you will see a “404 Not Found” page like the following (maybe with different Nginx version):
-
-
That’s no problem, because it means that the web server it’s actually working. Just add an index.html file with something simple to see it in action (in the /var/www/some_folder that you decided upon). If you keep seeing the 404 page make sure your root line is correct and that the directory/index file exists.
The only “bad” (bloated) thing about Certbot is that it uses Python, but for me it doesn’t matter too much. You may want to look up another alternative if you prefer. Install the packages certbot and certbot-nginx:
-
pacman -S certbot certbot-nginx
-
-
After that, all you have to do now is run certbot and follow the instructions given by the tool:
-
certbot --nginx
-
-
It will ask you for some information, for you to accept some agreements and the names to activate HTTPS for. Also, you will want to “say yes” to the redirection from HTTP to HTTPS. And that’s it, you can now go to your website and see that you have HTTPS active.
-
Now, the certificate given by certbot expires every 3 months or something like that, so you want to renew this certificate every once in a while. I did this before using cron or manually creating a systemd timer and service, but now it’s just a matter of enabling the certbot-renew.timer:
-
systemctl enable --now certbot-renew.timer
-
-
The deploy-hook is not needed anymore, only for plugins. For more, visit the Arch Linux Wiki.
]]>
-
-
- Así es raza, el blog ya tiene timestamps
- https://blog.luevano.xyz/a/el_blog_ya_tiene_timestamps.html
- https://blog.luevano.xyz/a/el_blog_ya_tiene_timestamps.html
- Tue, 16 Mar 2021 02:46:24 GMT
- Short
- Spanish
- Tools
- Update
An update on the state of the blog and the system used to build it.
So yeah, this entry is just to give an update on my first post. I’ve now modified ssg enough so that it handles timestamps, and I’m more familiar with the script now, so I’ll be able to extend it further; for now, entries have their creation date (and modification date, where applicable) at the end, and the index is now ordered by date, which for now is somewhat simple but easy to extend.
-
The only thing left is to change the blog’s format a bit (and the site’s in general), because in a moment of desperation I set all the text justified and it doesn’t always look good, so that needs fixing. And even though it took me longer than I would’ve liked, that’s just how it turned out, as a certain character would say.
-
The modified ssg is in my dotfiles (or directly here).
-Since in the end I didn’t end up using the modified ssg, this no longer exists.
-
Finally, I also removed the .html extensions from the URLs, because they look pretty lame; links ending in .html still redirect to the extensionless version though, so there’s no issue at all.
-
Update: I’m now using my own solution instead of ssg, which I called pyssg, and which I start talking about here.
]]>
-
-
- This is the first blog post, just for testing purposes
- https://blog.luevano.xyz/a/first_blog_post.html
- https://blog.luevano.xyz/a/first_blog_post.html
- Sat, 27 Feb 2021 13:08:33 GMT
- English
- Short
- Tools
- Update
- Just my first blog post where I state what tools I'm using to build this blog.
- I’m making this post just to figure out how ssg5 and lowdown are supposed to work, and eventually rssg.
-
At the moment I’m not satisfied because there’s no automatic date insertion into 1) the html file, 2) the blog post itself and 3) the listing system on the blog homepage, which also has a problem with the ordering of entries. And all of this just because I didn’t want to use Luke’s lb solution, as I don’t really like how he handles the scripts (but they just work).
-
Hopefully, for tomorrow all of this will be sorted out and I’ll have a working blog system.
-
Update: I’m now using my own solution which I called pyssg, of which I talk about here.