I am seeking advice regarding my ebook collection on a Linux system. It lives on an external drive, sorted into categories, but many ebooks are still unsorted. I have tried using Calibre for organization, but on import it copies every file into its library folder on my main drive, where I don’t want to keep any media. I would like to:
- Use Calibre’s automatic organization (tags, etc.) without duplicating files
- Maintain my existing folder structure while using Calibre
- Automatically sort the remaining ebooks into my existing categories/folder structure
Given the size of my collection, I am considering symlinks, if there is a simple way to automate creating them.
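One way to automate the symlink idea is to walk the existing tree and recreate its folder structure as links. A minimal sketch, assuming a source/destination layout like the one below (the `EXTS` set and the demo paths are placeholders, and note that Calibre may still copy the link *target* on import, so test on a few files first):

```python
import tempfile
from pathlib import Path

EXTS = {".epub", ".pdf", ".mobi", ".azw3"}  # adjust to the formats you keep

def mirror_as_symlinks(src: Path, dest: Path) -> int:
    """Recreate src's folder structure under dest, symlinking each ebook."""
    count = 0
    for f in src.rglob("*"):
        if f.is_file() and f.suffix.lower() in EXTS:
            link = dest / f.relative_to(src)
            link.parent.mkdir(parents=True, exist_ok=True)
            if not link.exists():
                link.symlink_to(f.resolve())
                count += 1
    return count

# Demo on a throwaway tree; swap in your real source/destination paths:
src, dest = Path(tempfile.mkdtemp()), Path(tempfile.mkdtemp())
(src / "scifi").mkdir()
(src / "scifi" / "dune.epub").write_text("placeholder")
n = mirror_as_symlinks(src, dest)
print(n, (dest / "scifi" / "dune.epub").is_symlink())
```

If both drives were on the same filesystem, hardlinks (`link.hardlink_to(f)`) would avoid the double storage outright, but across an external and an internal drive symlinks are the only option.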
Regarding automatic sorting by category, I am looking for a solution that doesn’t require manual organization or a significant time investment. I’m wondering whether there’s a way to look up metadata by file hash, or any other method that avoids manual work. Most of the files should have title and author metadata embedded, but some won’t. I’m not in a rush to solve this, since I can still locate most ebooks by title without any organization.
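For the files that do carry embedded metadata, EPUBs at least can be read without Calibre: they are zip archives whose OPF file holds Dublin Core title/creator fields. A rough sketch of that (real files vary in structure, and Calibre’s `ebook-meta` command-line tool does the same job more robustly across more formats):

```python
import os
import tempfile
import zipfile
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"
CNT = "{urn:oasis:names:tc:opendocument:xmlns:container}"

def epub_title_author(path):
    """Pull title/creator out of an EPUB's embedded OPF metadata."""
    with zipfile.ZipFile(path) as z:
        # container.xml points at the OPF package document
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opf_path = container.find(f".//{CNT}rootfile").get("full-path")
        opf = ET.fromstring(z.read(opf_path))
        title = opf.find(f".//{DC}title")
        creator = opf.find(f".//{DC}creator")
        return (title.text if title is not None else None,
                creator.text if creator is not None else None)

# Build a minimal fake EPUB to demonstrate (real files carry more structure):
path = os.path.join(tempfile.mkdtemp(), "demo.epub")
container_xml = ('<container xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
                 '<rootfiles><rootfile full-path="content.opf"/></rootfiles></container>')
opf_xml = ('<package xmlns="http://www.idpf.org/2007/opf">'
           '<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">'
           '<dc:title>Demo Title</dc:title><dc:creator>A. Author</dc:creator>'
           '</metadata></package>')
with zipfile.ZipFile(path, "w") as z:
    z.writestr("META-INF/container.xml", container_xml)
    z.writestr("content.opf", opf_xml)
print(epub_title_author(path))  # → ('Demo Title', 'A. Author')
```

Once you have title/author per file, sorting into your category folders is a second script on top of this.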
Has anyone encountered a similar problem and found a solution? I would appreciate any suggestions for tools, scripts, or workflows that might help. Thank you in advance for any advice!
deleted by creator
I hope someone gives you a good answer, because I’d like one myself. My method has just been to do this stuff little by little. I would also recommend Calibre-Web as the interface instead of the Calibre desktop app. You can run both in Docker and access Calibre on your server from whatever computer you happen to be on. I find that centralizing collections makes managing them at least more mentally manageable.
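For reference, a sketch of that Docker setup using the linuxserver.io Calibre-Web image; the mount paths, user/group IDs, and timezone are assumptions to adapt to your server:

```yaml
services:
  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - /mnt/external/ebooks:/books   # your existing library on the external drive
    ports:
      - "8083:8083"                   # Calibre-Web's web UI
    restart: unless-stopped
```

Calibre-Web only needs read access to an existing Calibre library database, so it fits the “don’t duplicate my files” requirement better than running the desktop app everywhere.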
You might want to give an idea of the size of your library. What some people consider large, others might consider nothing much. If it is exceedingly large you’re better off asking someplace with more data hoarders instead of a general Linux board.
I honestly don’t know that there is one. What OP is looking for is effectively an AI librarian… this is literally a full-time job for some people. I’m sure OP doesn’t have quite that many books, but the point remains
How many ebooks are you talking about (millions)? Is part of it just a question of finding duplicated files? That’s easy with a shell script. For metadata, check whether the books already embed it, since a lot do. After that, you can use fairly crude hacks as an initial pass at matching against library records. Code like that already exists; try some web searches, and maybe code4lib (library-related programming), if that is still around. I saw your earlier comment before you deleted it, and it was perfectly fine.
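The hash-based duplicate pass the parent comment mentions, sketched in Python for portability (a shell equivalent with `find` and `sha256sum` works just as well; the demo tree below is throwaway):

```python
import hashlib
import tempfile
from collections import defaultdict
from pathlib import Path

def find_duplicates(root):
    """Group files under root by SHA-256 digest; any group with more
    than one path is a set of byte-identical duplicates."""
    by_hash = defaultdict(list)
    for f in Path(root).rglob("*"):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            by_hash[digest].append(f)
    return [paths for paths in by_hash.values() if len(paths) > 1]

# Demo on a throwaway tree; point root at your real collection instead:
root = Path(tempfile.mkdtemp())
(root / "a.epub").write_bytes(b"same bytes")
(root / "b.epub").write_bytes(b"same bytes")
(root / "c.epub").write_bytes(b"different")
dupes = find_duplicates(root)
print(len(dupes), len(dupes[0]))  # → 1 2
```

On a very large collection you would first group by file size and only hash the size-collisions, since reading every byte of every file is the slow part.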