From 0ed2a2071fe5b7610d4e4122096129857359b651 Mon Sep 17 00:00:00 2001 From: David Luevano Alvarado Date: Sun, 11 Jun 2023 04:34:59 -0600 Subject: update manga server entry --- live/blog/a/manga_server_with_komga.html | 85 ++++++++++++++++++++++---------- live/blog/rss.xml | 84 +++++++++++++++++++++---------- live/blog/sitemap.xml | 2 +- 3 files changed, 118 insertions(+), 53 deletions(-) (limited to 'live/blog') diff --git a/live/blog/a/manga_server_with_komga.html b/live/blog/a/manga_server_with_komga.html index e264374..182956d 100644 --- a/live/blog/a/manga_server_with_komga.html +++ b/live/blog/a/manga_server_with_komga.html @@ -159,7 +159,7 @@ sudo rm -r yay

This komga package creates a komga (service) user and group, which is tied to the included komga.service.

Configure it by editing /etc/komga.conf:

-
SERVER_PORT=8989
+
SERVER_PORT=8989
 SERVER_SERVLET_CONTEXT_PATH=/ # this depends a lot of how it's going to be served (domain, subdomain, ip, etc)
 
 KOMGA_LIBRARIES_SCAN_CRON="0 0 * * * ?"
@@ -176,14 +176,14 @@ KOMGA_DATABASE_BACKUP_SCHEDULE="0 0 */8 * * ?"
 

My changes (shown above):

  • Port on 8989 because 8080 is too generic.
  • cron schedules

    If you’re going to run it locally (or LAN/VPN) you can start the komga.service and access it via IP at http://<your-server-ip>:<port>(/base_url) as stated at Komga: Accessing the web interface, else continue with the next steps for the reverse proxy and certificate.

    +

    If you’re going to run it locally (or LAN/VPN) you can start the komga.service and access it via IP at http://<your-server-ip>:<port>(/base_url) as stated at Komga: Accessing the web interface, then you can continue with the mangal section; otherwise continue with the next steps for the reverse proxy and certificate.

    Reverse proxy

    Create the reverse proxy configuration (this is for nginx). In my case I’ll use a subdomain, so this is a new config called komga.conf at the usual sites-available/enabled path:

    server {
    @@ -203,7 +203,7 @@ KOMGA_DATABASE_BACKUP_SCHEDULE="0 0 */8 * * ?"
         }
     }
     
    -

    If it’s going to be used as a subdir on another domain then just change the location (with /subdir instead of /) directive to the corresponding .conf file; be careful with the proxy_pass directive, it has to match what you configured at /etc/komga.conf for the SERVER_SERVLET_CONTEXT_PATH regardless of the /subdir you selected at location.

    +

    If it’s going to be used as a subdir on another domain then just change the location with /subdir instead of /; be careful with the proxy_pass directive, it has to match what you configured at /etc/komga.conf for the SERVER_SERVLET_CONTEXT_PATH regardless of the /subdir you selected at location.
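For the subdir case, the relevant part of the server block would look something like this (the /komga path and the port are illustrative assumptions; the point is that the proxy_pass suffix must match SERVER_SERVLET_CONTEXT_PATH):

```nginx
location /komga {
    # the trailing /komga must match SERVER_SERVLET_CONTEXT_PATH=/komga in /etc/komga.conf
    proxy_pass http://localhost:8989/komga;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```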

    SSL certificate

    If using a subdir then the same certificate for the subdomain/domain should work fine and no extra stuff is needed, otherwise if you’re following along with me we can create/extend the certificate by running:

    certbot --nginx
    @@ -259,12 +259,12 @@ default:other::r-x
     

    So instead of installing with yay we’ll build it from source. We need to have go installed:

    pacman -S go
     
    -

    Then clone my fork of mangal and build/install it:

    +

    Then clone my fork of mangal and build/install it:

    git clone https://github.com/luevano/mangal.git # not sure if you can use SSH to clone
     cd mangal
     make install # or just `make build` and then move the binary to somewhere in your $PATH
     
    -

    This will use go install so it will install to a path specified by your environment variables, for more run go help install. It was installed to $HOME/.local/bin/go/mangal for me, then just make sure this is included in your PATH.

    +

    This will use go install so it will install to a path specified by the go environment variables, for more info run go help install. It was installed to $HOME/.local/bin/go/mangal for me because of my env vars, then just make sure this is included in your PATH.
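A sketch of making sure that directory is in PATH (the $HOME/.local/bin/go path below mirrors my env vars and is an assumption; by default go install targets $GOBIN, else $GOPATH/bin, else $HOME/go/bin):

```shell
# the path below mirrors my GOBIN; adjust to whatever `go env GOBIN GOPATH` reports
go_bin="$HOME/.local/bin/go"

# append it to PATH only if it is not already there
case ":$PATH:" in
    *":$go_bin:"*) ;;
    *) PATH="$PATH:$go_bin" ;;
esac

# sanity check: the directory now appears in PATH
echo ":$PATH:" | grep -Fc ":$go_bin:"
```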

    Check it was correctly installed by running mangal version, which should print something like:

    ▇▇▇ mangal
     
    @@ -292,11 +292,11 @@ mangal config set -k logs.write -v true # I like to get logs for what happens
     

    Usage

    Two main ways of using mangal:

  • TUI: for initial browsing/downloading and testing things out. If the manga finished publishing, this should be enough.
  • inline: for automation on manga that is still publishing and I need to check/download every once in a while.

    Headless browser

    -

    Before continuing, I gotta say I went through some bullshit while trying to use the custom Lua scrapers that use the headless browser (actually just a wrapper of go-rod/rod, and honestly it is not really a “headless” browser, mangal “documentation” is just wrong). For mor on my rant check out my last entry.

    +

    Before continuing, I gotta say I went through some bullshit while trying to use the custom Lua scrapers that use the headless browser (actually just a wrapper of go-rod/rod, and honestly it is not really a “headless” browser, mangal “documentation” is just wrong). For more on my rant check out my last entry.

    There is no concrete documentation on the “headless” browser, only that it is automatically set up and ready to use… but it doesn’t install any library/dependency needed. I discovered the following libraries that were missing on my Arch minimal install:

    • library -> arch package containing it
@@ -322,18 +322,30 @@ mangal config set -k logs.write -v true # I like to get logs for what happens
      mangal
       

      Download manga using the TUI by selecting the source/scraper, search the manga/comic you want and then you can select each chapter to download (use tab to select all). This is what I use when downloading manga that already finished publishing, or when I’m just searching and testing out how it downloads the manga (directory name, and manga information).

      -

      Note that some scrapters will contain duplicated chapters, as they have uploaded chapters from the community. This happens a lot with MangaDex.

      +

      Note that some scrapers will contain duplicated chapters, as they have multiple uploaded chapters from the community, usually from different scanlation groups. This happens a lot with MangaDex.

      Inline

      The inline mode is a single terminal command meant to be used to automate stuff or for more advanced options. You can peek a bit into the “documentation”, which honestly is ass because it doesn’t explain much. The minimal command for inline according to the help is:

      mangal inline --manga <option> --query <manga-title>
       
      -

      But this will not produce anything because it also needs --source (or set the default using the config key downloader.default_sources) and either --json (for the search result) or --download to actually download whatever was found but it could download something you don’t want so do the --json first.

      +

      But this will not produce anything because it also needs --source (or set the default using the config key downloader.default_sources) and either --json, which basically just does the search and returns the result in json format, or --download to actually download whatever is found; I recommend doing --json first to check that the correct manga will be downloaded, then doing --download.

      Something not mentioned anywhere is the --manga flag options (I found them in the source code); it has 3 available options:

      • first: first manga entry found for the search.
      • last: last manga entry found for the search.
      • exact: exact manga title match. This is the one I use.
      +

      Similar to --chapters, there are a few options not explained (that I found at the source code, too). I usually just use all, but the other options are:

      • all: all chapters found in the chapter list.
      • first: first chapter found in the chapter list.
      • last: last chapter found in the chapter list.
      • [from]-[to]: selector for the chapters found in the chapter list, index starts at 0.
        • If the selectors (from or to) exceed the amount of chapters in the chapter list it just adjusts to the maximum available.
        • I had to fix this at the source code because if you wanted to to be the last chapter, it did to + 1 and it failed due to index out of range.
      • @[sub]@: not sure how this works exactly, my understanding is that it’s for “named” chapters.
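The [from]-[to] clamping behavior described above can be sketched in plain shell (a hypothetical helper for illustration only; mangal does this internally):

```shell
# print chapter indices for a "[from]-[to]" selector, clamping `to` to the last index
select_chapters () {
    from=${1%-*}
    to=${1#*-}
    total=$2
    max=$((total - 1))
    # adjust to the maximum available instead of failing with index out of range
    if [ "$to" -gt "$max" ]; then
        to=$max
    fi
    seq "$from" "$to"
}

select_chapters "3-100" 10   # chapter list has 10 chapters: prints indices 3 through 9
```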

      That said, I’ll do an example by using Mangapill as source, and will search for Demon Slayer: Kimetsu no Yaiba:

      1. Search first and make sure my command will pull the manga I want:
      @@ -341,16 +353,16 @@ mangal config set -k logs.write -v true # I like to get logs for what happens
        mangal inline --source "Mangapill" --manga "exact" --query "Kimetsu no Yaiba" --json | jq # I use jq to pretty format the output
         
        2. I make sure the json output contains the correct manga information: name, url, etc.
           • You can also include the flag --include-anilist-manga to include anilist information (if any) so you can check that the correct anilist id is attached. If the correct one is not attached (and it exists) then you can run the command:

             mangal inline anilist set --name "Kimetsu no Yaiba" --id 101922

             Which means that all “searches” for that --name flag will be attached to that specific anilist ID.
        3. If I’m okay with the outputs, then I change --json for --download to actually download:

           mangal inline --source "Mangapill" --manga "exact" --query "Kimetsu no Yaiba" --download
        @@ -358,26 +370,26 @@ mangal inline anilist set --name "Kimetsu no Yaiba" --id 101922

      3. Check if the manga downloaded correctly. I do this by going to my download directory and checking the directory name (I’m picky with this stuff), that all chapters were downloaded, that it includes a correct series.json file and that it contains a cover.<img-ext>; this usually means it correctly pulled information from anilist and that it will contain metadata Komga will be able to use.
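That per-directory check can be scripted; a minimal sketch (the helper name and expected layout are assumptions based on my setup: a series.json plus a cover.<img-ext> per manga directory):

```shell
# hypothetical sanity-check helper for one downloaded manga directory
check_manga_dir () {
    dir=$1
    if [ ! -f "$dir/series.json" ]; then
        echo "missing series.json"
        return 1
    fi
    # cover.<img-ext> can be cover.jpg, cover.png, etc.
    if ! ls "$dir"/cover.* >/dev/null 2>&1; then
        echo "missing cover image"
        return 1
    fi
    echo "ok"
}
```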

      Komga library

      -

      Now I just check that it is correctly added to Komga by clicking on the 3 dots to the right of the library name and click on “Scan library files” to refresh if the cron timer hasn’t pass by yet.

      +

      Now I just check that it is correctly added to Komga by clicking on the 3 dots to the right of the library name and click on “Scan library files” to refresh if the cron timer hasn’t activated this yet.

      Then I check that the metadata is correct (once the manga is fully indexed), such as title, summary, chapter count, language, tags, genre, etc., which honestly never works fine, as mangal creates the series.json with the comicId field with an upper case I while Komga expects a lower case i (comicid), so it falls back to using the info from the first chapter. I’ll probably fix this on mangal’s side and see how it goes.
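A batch workaround could lowercase the key in place; this is a sketch under the assumption that the key appears literally as "comicId" in each file (I haven’t verified the series.json layout beyond the field name):

```shell
# lowercase the comicId key in every series.json under a library directory
fix_series_json () {
    find "$1" -name 'series.json' -exec sed -i 's/"comicId"/"comicid"/g' {} +
}
```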

      So, what I do is manually edit the metadata for the manga, changing whatever is wrong or adding what’s missing (I like adding anilist and MyAnimeList links), and then leave it as is.

      Automation

      -

      The straight forward approach for automation is just to bundle a bunch of mangal inline commands in a shell script and automate either via cron or systemd/Timers. But, as always, I overcomplicated/overengineered my approach, which is the following:

      +

      The straightforward approach for automation is just to bundle a bunch of mangal inline commands in a shell script and schedule its execution either via cron or systemd/Timers. But, as always, I overcomplicated/overengineered my approach, which is the following:

      1. Group manga names per source.
      2. Configure anything that should always be set before executing mangal, this includes anilist bindings.
      3. Have a way to track the changes/updates on each run.
      4. Use that tracker to know where to start downloading chapters from.
         • This is optional, as you can just do --chapters "all" and it will work, but I do it mostly to keep the logs/output cleaner/shorter.
      5. Download/update each manga using mangal inline.
      6. Wrap everything in a systemd service and timer.

      Manga list example:

      mangapill="Berserk|Chainsaw Man|Dandadan|Jujutsu Kaisen|etc..."
       
      -

      Bash function that handles the download per manga in the list:

      +

      Function that handles the download per manga in the list:

      mangal_src_dl () {
           source_name=$1
           manga_list=$(echo "$2" | tr '|' '\n')
      @@ -413,6 +425,22 @@ mangal inline anilist set --name "Kimetsu no Yaiba" --id 101922

      }

      Where $TRACKER_FILE is just a variable holding a path to some file where you can store the tracking and $DOWNLOAD_FORMAT the format for the mangas, for me it’s cbz. Then the usage would be something like mangal_src_dl "Mangapill" "$mangapill", meaning that it is a function call per source.

      +

      A simpler function without “tracking” would be:

      +
      mangal_src_dl () {
      +    source_name=$1
      +    manga_list=$(echo "$2" | tr '|' '\n')
      +
      +    while IFS= read -r line; do
      +        echo "Downloading all chapters for $line from $source_name..."
      +        mangal inline -S "$source_name" -q "$line" -m "exact" -F "$DOWNLOAD_FORMAT" -c "all" -d
      +        if [ $? -ne 0 ]; then
      +            echo "Failed to download chapters for $line."
      +            continue
      +        fi
      +        echo "Finished downloading chapters for $line."
      +    done <<< "$manga_list"
      +}
      +

      The tracker file would have a format like the following:

      # Updated: 06/10/23 10:53:15 AM CST
       Berserk|0392|392|Mangapill
      @@ -420,8 +448,12 @@ Dandadan|0110|110|Mangapill
       ...
       

      And note that if you already had manga downloaded and you run the script for the first time, then it will show as if it downloaded everything from the first chapter, but that’s just how mangal works: it will actually just discover the downloaded chapters and only download anything missing.
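Reading a manga’s last downloaded chapter back out of that tracker file is then a one-liner; a sketch assuming the name|padded|last|source layout shown above:

```shell
# build a demo tracker file matching the format shown above
TRACKER_FILE=$(mktemp)
cat > "$TRACKER_FILE" <<'EOF'
# Updated: 06/10/23 10:53:15 AM CST
Berserk|0392|392|Mangapill
Dandadan|0110|110|Mangapill
EOF

# last downloaded chapter for a given title (field 3, pipe-separated)
last_chapter=$(awk -F'|' '$1 == "Dandadan" {print $3}' "$TRACKER_FILE")
echo "$last_chapter"   # prints 110
```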

      -

      Any configuration the downloader/updater might need needs to be done before the mangal_src_dl calls. I like to configure mangal for download path, format, etc.. To clear the mangal cache and rod browser (headless browser used in some custom sources) as well as set up any anilist bindings. An example of an anilist binding I had to do is for Mushoku Tensei, as it has both a light novel and manga version, both having different information, for me it was mangal inline anilist set --name "Mushoku Tensei - Isekai Ittara Honki Dasu" --id 85564.

      -

      Finally is just a matter of using your prefered way of scheduling, I’ll use systemd/Timers but anything is fine. You could make the downloader script more sophisticated and only running every week on which each manga gets released usually, but that’s too much work, so I’ll just run it once daily probably, or 2-3 times daily.

      +

      Any configuration the downloader/updater might need has to be done before the mangal_src_dl calls. I like to configure mangal for download path, format, etc. I found that it is necessary to clear the mangal and rod browser cache (headless browser used in some custom sources), from personal experience and from others: mangal#170 and kaizoku#89.

      +

      Also you should set any anilist binding necessary for the downloading (as the cache was cleared). An example of an anilist binding I had to do is for Mushoku Tensei, as it has both a light novel and manga version; for me it’s the following binding:

      +
      mangal inline anilist set --name "Mushoku Tensei - Isekai Ittara Honki Dasu" --id 85564
      +
      +

      Finally it’s just a matter of using your preferred way of scheduling; I’ll use systemd/Timers but anything is fine. You could make the downloader script more sophisticated and only run it every week on the day each manga (usually) gets released, but that’s too much work; I’ll probably just run it once daily.
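For the systemd/Timers route, a pair of units along these lines would do (the unit names and script path are assumptions for illustration):

```ini
# /etc/systemd/system/manga-dl.service
[Unit]
Description=Download manga updates via mangal

[Service]
Type=oneshot
ExecStart=/usr/local/bin/manga-dl.sh

# /etc/systemd/system/manga-dl.timer
[Unit]
Description=Run the manga downloader daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Then it would be enabled with systemctl enable --now manga-dl.timer.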

      +

      A feature I want to add, and probably will, is sending notifications (probably through email) with a summary of manga downloaded or failed to download, so I’m on top of the updates. For now this is good enough and it’s been working so far.

      Alternative downloaders

      Just for the record, here is a list of downloaders/scrapers I considered before starting to use mangal:

          diff --git a/live/blog/sitemap.xml b/live/blog/sitemap.xml index 1904029..a30b52f 100644 --- a/live/blog/sitemap.xml +++ b/live/blog/sitemap.xml @@ -47,7 +47,7 @@ https://blog.luevano.xyz/a/manga_server_with_komga.html - 2023-06-10 + 2023-06-11 weekly 1.0 -- cgit v1.2.3-70-g09d2