Merge branch 'janeczku:Develop' into Develop

commit 5d86bb0ba0

20  .github/ISSUE_TEMPLATE/bug_report.md  (vendored)

@@ -6,12 +6,23 @@ labels: ''
assignees: ''

---

<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->

**Describe the bug/problem**
## Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

Please also have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

**Describe the bug/problem**

A clear and concise description of what the bug is. If you are asking for support, please check our [Wiki](https://github.com/janeczku/calibre-web/wiki) if your question is already answered there.

**To Reproduce**

Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'

@@ -19,15 +30,19 @@ Steps to reproduce the behavior:
4. See error

**Logfile**

Add content of calibre-web.log file or the relevant error, try to reproduce your problem with "debug" log-level to get more output.

**Expected behavior**

A clear and concise description of what you expected to happen.

**Screenshots**

If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**

- OS: [e.g. Windows 10/Raspberry Pi OS]
- Python version: [e.g. python2.7]
- Calibre-Web version: [e.g. 0.6.8 or 087c4c59 (git rev-parse --short HEAD)]:

@@ -37,3 +52,4 @@ If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here. [e.g. access via reverse proxy, database background sync, special database location]
9  .github/ISSUE_TEMPLATE/feature_request.md  (vendored)

@@ -7,7 +7,14 @@ assignees: ''

---

<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->
# Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

@@ -1,3 +1,10 @@
# Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

# Calibre-Web

Calibre-Web is a web app that offers a clean and intuitive interface for browsing, reading, and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.
43  cps/__init__.py  (Normal file → Executable file)

@@ -103,7 +103,7 @@ web_server = WebServer()
updater_thread = Updater()

if limiter_present:
limiter = Limiter(key_func=True, headers_enabled=True, auto_check=False, swallow_errors=True)
limiter = Limiter(key_func=True, headers_enabled=True, auto_check=False, swallow_errors=False)
else:
limiter = None

@@ -125,13 +125,6 @@ def create_app():
ub.password_change(cli_param.user_credentials)

if not limiter:
log.info('*** "flask-limiter" is needed for calibre-web to run. '
'Please install it using pip: "pip install flask-limiter" ***')
print('*** "flask-limiter" is needed for calibre-web to run. '
'Please install it using pip: "pip install flask-limiter" ***')
web_server.stop(True)
sys.exit(8)
if sys.version_info < (3, 0):
log.info(
'*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, '

@@ -141,13 +134,6 @@ def create_app():
'please update your installation to Python3 ***')
web_server.stop(True)
sys.exit(5)
if not wtf_present:
log.info('*** "flask-WTF" is needed for calibre-web to run. '
'Please install it using pip: "pip install flask-WTF" ***')
print('*** "flask-WTF" is needed for calibre-web to run. '
'Please install it using pip: "pip install flask-WTF" ***')
web_server.stop(True)
sys.exit(7)

lm.login_view = 'web.login'
lm.anonymous_user = ub.Anonymous

@@ -158,13 +144,21 @@ def create_app():
calibre_db.init_db()

updater_thread.init_updater(config, web_server)
# Perform dry run of updater and exit afterwards
# Perform dry run of updater and exit afterward
if cli_param.dry_run:
updater_thread.dry_run()
sys.exit(0)
updater_thread.start()

for res in dependency_check() + dependency_check(True):
requirements = dependency_check()
for res in requirements:
if res['found'] == "not installed":
message = ('Cannot import {name} module, it is needed to run calibre-web, '
'please install it using "pip install {name}"').format(name=res["name"])
log.info(message)
print("*** " + message + " ***")
web_server.stop(True)
sys.exit(8)
for res in requirements + dependency_check(True):
log.info('*** "{}" version does not meet the requirements. '
'Should: {}, Found: {}, please consider installing required version ***'
.format(res['name'],

@@ -192,12 +186,21 @@ def create_app():
services.ldap.init_app(app, config)
if services.goodreads_support:
services.goodreads_support.connect(config.config_goodreads_api_key,
config.config_goodreads_api_secret_e,
config.config_use_goodreads)
config.store_calibre_uuid(calibre_db, db.Library_Id)
# Configure rate limiter
# https://limits.readthedocs.io/en/stable/storage.html
app.config.update(RATELIMIT_ENABLED=config.config_ratelimiter)
limiter.init_app(app)
if config.config_limiter_uri != "" and not cli_param.memory_backend:
app.config.update(RATELIMIT_STORAGE_URI=config.config_limiter_uri)
if config.config_limiter_options != "":
app.config.update(RATELIMIT_STORAGE_OPTIONS=config.config_limiter_options)
try:
limiter.init_app(app)
except Exception as e:
log.error('Wrong Flask Limiter configuration, falling back to default: {}'.format(e))
app.config.update(RATELIMIT_STORAGE_URI=None)
limiter.init_app(app)

# Register scheduled tasks
from .schedule import register_scheduled_tasks, register_startup_tasks
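For context on the limiter hunk above: the new code first applies the configured storage backend and falls back to Flask-Limiter's default (in-memory) storage when initialization fails. A minimal sketch of that pattern, assuming flask-limiter 3.x; the Redis URI and the key function are placeholders, not Calibre-Web's actual values:

```python
# Hedged sketch of the fallback pattern shown in the diff above (not Calibre-Web's exact code).
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address  # stand-in key function

app = Flask(__name__)
limiter = Limiter(key_func=get_remote_address, auto_check=False, swallow_errors=False)

app.config.update(RATELIMIT_ENABLED=True,
                  RATELIMIT_STORAGE_URI="redis://localhost:6379")  # assumed backend URI
try:
    limiter.init_app(app)  # may fail if the storage URI cannot be parsed or created
except Exception as exc:
    print("falling back to default limiter storage:", exc)
    app.config.update(RATELIMIT_STORAGE_URI=None)  # unset -> library default storage
    limiter.init_app(app)
```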
@@ -49,9 +49,9 @@ sorted_modules = OrderedDict((sorted(modules.items(), key=lambda x: x[0].casefol

def collect_stats():
if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
calibre_web_version = constants.STABLE_VERSION['version']
calibre_web_version = constants.STABLE_VERSION['version'].replace("b", " Beta")
else:
calibre_web_version = (constants.STABLE_VERSION['version'] + ' - '
calibre_web_version = (constants.STABLE_VERSION['version'].replace("b", " Beta") + ' - '
+ constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
+ constants.NIGHTLY_VERSION[1].replace('%', '%%'))
46  cps/admin.py  (Normal file → Executable file)

@@ -48,6 +48,7 @@ from . import db, calibre_db, ub, web_server, config, updater_thread, gdriveutil
kobo_sync_status, schedule
from .helper import check_valid_domain, send_test_mail, reset_password, generate_password_hash, check_email, \
valid_email, check_username
from .embed_helper import get_calibre_binarypath
from .gdriveutils import is_gdrive_ready, gdrive_support
from .render_template import render_title_template, get_sidebar_config
from .services.worker import WorkerThread

@@ -217,7 +218,7 @@ def admin():
form_date += timedelta(hours=int(commit[20:22]), minutes=int(commit[23:]))
commit = format_datetime(form_date - tz, format='short')
else:
commit = version['version']
commit = version['version'].replace("b", " Beta")

all_user = ub.session.query(ub.User).all()
# email_settings = mail_config.get_mail_settings()

@@ -916,11 +917,15 @@ def list_restriction(res_type, user_id):

@admi.route("/ajax/fullsync", methods=["POST"])
@login_required
def ajax_fullsync():
count = ub.session.query(ub.KoboSyncedBooks).filter(current_user.id == ub.KoboSyncedBooks.user_id).delete()
message = _("{} sync entries deleted").format(count)
ub.session_commit(message)
return Response(json.dumps([{"type": "success", "message": message}]), mimetype='application/json')
def ajax_self_fullsync():
return do_full_kobo_sync(current_user.id)

@admi.route("/ajax/fullsync/<int:userid>", methods=["POST"])
@login_required
@admin_required
def ajax_fullsync(userid):
return do_full_kobo_sync(userid)

@admi.route("/ajax/pathchooser/")

@@ -930,6 +935,13 @@ def ajax_pathchooser():
return pathchooser()

def do_full_kobo_sync(userid):
count = ub.session.query(ub.KoboSyncedBooks).filter(userid == ub.KoboSyncedBooks.user_id).delete()
message = _("{} sync entries deleted").format(count)
ub.session_commit(message)
return Response(json.dumps([{"type": "success", "message": message}]), mimetype='application/json')

def check_valid_read_column(column):
if column != "0":
if not calibre_db.session.query(db.CustomColumns).filter(db.CustomColumns.id == column) \

@@ -1619,7 +1631,10 @@ def import_ldap_users():

imported = 0
for username in new_users:
user = username.decode('utf-8')
if isinstance(username, bytes):
user = username.decode('utf-8')
else:
user = username
if '=' in user:
# if member object field is empty take user object as filter
if config.config_ldap_member_user_object:

@@ -1705,7 +1720,7 @@ def _db_configuration_update_helper():
return _db_configuration_result('{}'.format(ex), gdrive_error)

if db_change or not db_valid or not config.db_configured \
or config.config_calibre_dir != to_save["config_calibre_dir"]:
or config.config_calibre_dir != to_save["config_calibre_dir"]:
if not os.path.exists(metadata_db) or not to_save['config_calibre_dir']:
return _db_configuration_result(_('DB Location is not Valid, Please Enter Correct Path'), gdrive_error)
else:

@@ -1751,6 +1766,7 @@ def _configuration_update_helper():

_config_checkbox_int(to_save, "config_uploading")
_config_checkbox_int(to_save, "config_unicode_filename")
_config_checkbox_int(to_save, "config_embed_metadata")
# Reboot on config_anonbrowse with enabled ldap, as decoraters are changed in this case
reboot_required |= (_config_checkbox_int(to_save, "config_anonbrowse")
and config.config_login_type == constants.LOGIN_LDAP)

@@ -1767,8 +1783,14 @@ def _configuration_update_helper():
constants.EXTENSIONS_UPLOAD = config.config_upload_formats.split(',')

_config_string(to_save, "config_calibre")
_config_string(to_save, "config_converterpath")
_config_string(to_save, "config_binariesdir")
_config_string(to_save, "config_kepubifypath")
if "config_binariesdir" in to_save:
calibre_status = helper.check_calibre(config.config_binariesdir)
if calibre_status:
return _configuration_result(calibre_status)
to_save["config_converterpath"] = get_calibre_binarypath("ebook-convert")
_config_string(to_save, "config_converterpath")

reboot_required |= _config_int(to_save, "config_login_type")

@@ -1787,11 +1809,8 @@ def _configuration_update_helper():
# Goodreads configuration
_config_checkbox(to_save, "config_use_goodreads")
_config_string(to_save, "config_goodreads_api_key")
if to_save.get("config_goodreads_api_secret_e", ""):
_config_string(to_save, "config_goodreads_api_secret_e")
if services.goodreads_support:
services.goodreads_support.connect(config.config_goodreads_api_key,
config.config_goodreads_api_secret_e,
config.config_use_goodreads)

_config_int(to_save, "config_updatechannel")

@@ -1815,6 +1834,7 @@ def _configuration_update_helper():
_config_checkbox(to_save, "config_password_number")
_config_checkbox(to_save, "config_password_lower")
_config_checkbox(to_save, "config_password_upper")
_config_checkbox(to_save, "config_password_character")
_config_checkbox(to_save, "config_password_special")
if 0 < int(to_save.get("config_password_min_length", "0")) < 41:
_config_int(to_save, "config_password_min_length")

@@ -1822,6 +1842,8 @@ def _configuration_update_helper():
return _configuration_result(_('Password length has to be between 1 and 40'))
reboot_required |= _config_int(to_save, "config_session")
reboot_required |= _config_checkbox(to_save, "config_ratelimiter")
reboot_required |= _config_string(to_save, "config_limiter_uri")
reboot_required |= _config_string(to_save, "config_limiter_options")

# Rarfile Content configuration
_config_string(to_save, "config_rarfile_location")
53  cps/clean_html.py  (Normal file)

@@ -0,0 +1,53 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from . import logger
from lxml.etree import ParserError

try:
# at least bleach 6.0 is needed -> incomplatible change from list arguments to set arguments
from bleach import clean_text as clean_html
BLEACH = True
except ImportError:
try:
BLEACH = False
from nh3 import clean as clean_html
except ImportError:
try:
BLEACH = False
from lxml.html.clean import clean_html
except ImportError:
clean_html = None

log = logger.create()

def clean_string(unsafe_text, book_id=0):
try:
if BLEACH:
safe_text = clean_html(unsafe_text, tags=set(), attributes=set())
else:
safe_text = clean_html(unsafe_text)
except ParserError as e:
log.error("Comments of book {} are corrupted: {}".format(book_id, e))
safe_text = ""
except TypeError as e:
log.error("Comments can't be parsed, maybe 'lxml' is too new, try installing 'bleach': {}".format(e))
safe_text = ""
return safe_text
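For context: clean_string() above sanitizes user-supplied HTML with whichever backend is installed (bleach, nh3, or lxml) and returns an empty string when parsing fails. A hedged usage sketch; the sample input is invented:

```python
# Hedged usage sketch of the helper introduced above; the sample comment is made up.
from cps.clean_html import clean_string

raw = '<p onclick="alert(1)">A <b>great</b> read</p>'
safe = clean_string(raw, book_id=42)
print(safe)  # sanitized text; exact output depends on which backend is installed
```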
@@ -29,8 +29,8 @@ from .constants import DEFAULT_SETTINGS_FILE, DEFAULT_GDRIVE_FILE

def version_info():
if _NIGHTLY_VERSION[1].startswith('$Format'):
return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version']
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'], _NIGHTLY_VERSION[1])
return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version'].replace("b", " Beta")
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'].replace("b", " Beta"), _NIGHTLY_VERSION[1])

class CliParameter(object):

@@ -52,6 +52,7 @@ class CliParameter(object):
parser.add_argument('-v', '--version', action='version', help='Shows version number and exits Calibre-Web',
version=version_info())
parser.add_argument('-i', metavar='ip-address', help='Server IP-Address to listen')
parser.add_argument('-m', action='store_true', help='Use Memory-backend as limiter backend, use this parameter in case of miss configured backend')
parser.add_argument('-s', metavar='user:pass',
help='Sets specific username to new password and exits Calibre-Web')
parser.add_argument('-f', action='store_true', help='Flag is depreciated and will be removed in next version')

@@ -98,6 +99,8 @@ class CliParameter(object):
if args.k == "":
self.keyfilepath = ""

# overwrite limiter backend
self.memory_backend = args.m or None
# dry run updater
self.dry_run = args.d or None
# enable reconnect endpoint for docker database reconnect
@@ -102,7 +102,7 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
try:
cover_data = cf.read(name)[name].read()
cover_data = cf.read([name])[name].read()
except (py7zr.Bad7zFile, OSError) as ex:
log.error('7Zip file failed with error: {}'.format(ex))
break
@@ -34,6 +34,7 @@ except ImportError:
from sqlalchemy.ext.declarative import declarative_base

from . import constants, logger
from .subproc_wrapper import process_wait

log = logger.create()

@@ -113,8 +114,6 @@ class _Settings(_Base):

config_use_goodreads = Column(Boolean, default=False)
config_goodreads_api_key = Column(String)
config_goodreads_api_secret_e = Column(String)
config_goodreads_api_secret = Column(String)
config_register_email = Column(Boolean, default=False)
config_login_type = Column(Integer, default=0)

@@ -140,10 +139,12 @@ class _Settings(_Base):

config_kepubifypath = Column(String, default=None)
config_converterpath = Column(String, default=None)
config_binariesdir = Column(String, default=None)
config_calibre = Column(String)
config_rarfile_location = Column(String, default=None)
config_upload_formats = Column(String, default=','.join(constants.EXTENSIONS_UPLOAD))
config_unicode_filename = Column(Boolean, default=False)
config_embed_metadata = Column(Boolean, default=True)

config_updatechannel = Column(Integer, default=constants.UPDATE_STABLE)

@@ -162,9 +163,12 @@ class _Settings(_Base):
config_password_number = Column(Boolean, default=True)
config_password_lower = Column(Boolean, default=True)
config_password_upper = Column(Boolean, default=True)
config_password_character = Column(Boolean, default=True)
config_password_special = Column(Boolean, default=True)
config_session = Column(Integer, default=1)
config_ratelimiter = Column(Boolean, default=True)
config_limiter_uri = Column(String, default="")
config_limiter_options = Column(String, default="")

def __repr__(self):
return self.__class__.__name__

@@ -186,9 +190,11 @@ class ConfigSQL(object):
self.load()

change = False
if self.config_converterpath == None: # pylint: disable=access-member-before-definition

if self.config_binariesdir == None: # pylint: disable=access-member-before-definition
change = True
self.config_converterpath = autodetect_calibre_binary()
self.config_binariesdir = autodetect_calibre_binaries()
self.config_converterpath = autodetect_converter_binary(self.config_binariesdir)

if self.config_kepubifypath == None: # pylint: disable=access-member-before-definition
change = True

@@ -414,19 +420,13 @@ def _encrypt_fields(session, secret_key):
except OperationalError:
with session.bind.connect() as conn:
conn.execute(text("ALTER TABLE settings ADD column 'mail_password_e' String"))
conn.execute(text("ALTER TABLE settings ADD column 'config_goodreads_api_secret_e' String"))
conn.execute(text("ALTER TABLE settings ADD column 'config_ldap_serv_password_e' String"))
session.commit()
crypter = Fernet(secret_key)
settings = session.query(_Settings.mail_password, _Settings.config_goodreads_api_secret,
_Settings.config_ldap_serv_password).first()
settings = session.query(_Settings.mail_password, _Settings.config_ldap_serv_password).first()
if settings.mail_password:
session.query(_Settings).update(
{_Settings.mail_password_e: crypter.encrypt(settings.mail_password.encode())})
if settings.config_goodreads_api_secret:
session.query(_Settings).update(
{_Settings.config_goodreads_api_secret_e:
crypter.encrypt(settings.config_goodreads_api_secret.encode())})
if settings.config_ldap_serv_password:
session.query(_Settings).update(
{_Settings.config_ldap_serv_password_e:

@@ -474,17 +474,35 @@ def _migrate_table(session, orm_class, secret_key=None):
session.rollback()

def autodetect_calibre_binary():
def autodetect_calibre_binaries():
if sys.platform == "win32":
calibre_path = ["C:\\program files\\calibre\\ebook-convert.exe",
"C:\\program files(x86)\\calibre\\ebook-convert.exe",
"C:\\program files(x86)\\calibre2\\ebook-convert.exe",
"C:\\program files\\calibre2\\ebook-convert.exe"]
calibre_path = ["C:\\program files\\calibre\\",
"C:\\program files(x86)\\calibre\\",
"C:\\program files(x86)\\calibre2\\",
"C:\\program files\\calibre2\\"]
else:
calibre_path = ["/opt/calibre/ebook-convert"]
calibre_path = ["/opt/calibre/"]
for element in calibre_path:
if os.path.isfile(element) and os.access(element, os.X_OK):
return element
supported_binary_paths = [os.path.join(element, binary)
for binary in constants.SUPPORTED_CALIBRE_BINARIES.values()]
if all(os.path.isfile(binary_path) and os.access(binary_path, os.X_OK)
for binary_path in supported_binary_paths):
values = [process_wait([binary_path, "--version"],
pattern=r'\(calibre (.*)\)') for binary_path in supported_binary_paths]
if all(values):
version = values[0].group(1)
log.debug("calibre version %s", version)
return element
return ""

def autodetect_converter_binary(calibre_path):
if sys.platform == "win32":
converter_path = os.path.join(calibre_path, "ebook-convert.exe")
else:
converter_path = os.path.join(calibre_path, "ebook-convert")
if calibre_path and os.path.isfile(converter_path) and os.access(converter_path, os.X_OK):
return converter_path
return ""
@@ -156,6 +156,11 @@ EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr'
'prc', 'doc', 'docx', 'fb2', 'html', 'rtf', 'lit', 'odt', 'mp3', 'mp4', 'ogg',
'opus', 'wav', 'flac', 'm4a', 'm4b'}

_extension = ""
if sys.platform == "win32":
_extension = ".exe"
SUPPORTED_CALIBRE_BINARIES = {binary:binary + _extension for binary in ["ebook-convert", "calibredb"]}

def has_flag(value, bit_flag):
return bit_flag == (bit_flag & (value or 0))

@@ -169,13 +174,11 @@ BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, d
'series_id, languages, publisher, pubdate, identifiers')

# python build process likes to have x.y.zbw -> b for beta and w a counting number
STABLE_VERSION = {'version': '0.6.22 Beta'}
STABLE_VERSION = {'version': '0.6.22b'}

NIGHTLY_VERSION = dict()
NIGHTLY_VERSION[0] = '$Format:%H$'
NIGHTLY_VERSION[1] = '$Format:%cI$'
# NIGHTLY_VERSION[0] = 'bb7d2c6273ae4560e83950d36d64533343623a57'
# NIGHTLY_VERSION[1] = '2018-09-09T10:13:08+02:00'

# CACHE
CACHE_TYPE_THUMBNAILS = 'thumbnails'
@@ -839,8 +839,7 @@ class CalibreDB:
entries = list()
pagination = list()
try:
pagination = Pagination(page, pagesize,
len(query.all()))
pagination = Pagination(page, pagesize, query.count())
entries = query.order_by(*order).offset(off).limit(pagesize).all()
except Exception as ex:
log.error_or_exception(ex)
@@ -27,22 +27,22 @@ from shutil import copyfile
from uuid import uuid4
from markupsafe import escape, Markup # dependency of flask
from functools import wraps
from lxml.etree import ParserError
# from lxml.etree import ParserError

try:
# at least bleach 6.0 is needed -> incomplatible change from list arguments to set arguments
from bleach import clean_text as clean_html
BLEACH = True
except ImportError:
try:
from nh3 import clean as clean_html
BLEACH = False
except ImportError:
try:
from lxml.html.clean import clean_html
BLEACH = False
except ImportError:
clean_html = None
#try:
# # at least bleach 6.0 is needed -> incomplatible change from list arguments to set arguments
# from bleach import clean_text as clean_html
# BLEACH = True
#except ImportError:
# try:
# BLEACH = False
# from nh3 import clean as clean_html
# except ImportError:
# try:
# BLEACH = False
# from lxml.html.clean import clean_html
# except ImportError:
# clean_html = None

from flask import Blueprint, request, flash, redirect, url_for, abort, Response
from flask_babel import gettext as _

@@ -54,12 +54,14 @@ from sqlalchemy.orm.exc import StaleDataError
from sqlalchemy.sql.expression import func

from . import constants, logger, isoLanguages, gdriveutils, uploader, helper, kobo_sync_status
from .clean_html import clean_string
from . import config, ub, db, calibre_db
from .services.worker import WorkerThread
from .tasks.upload import TaskUpload
from .render_template import render_title_template
from .usermanagement import login_required_if_no_ano
from .kobo_sync_status import change_archived_books
from .redirect import get_redirect_location

editbook = Blueprint('edit-book', __name__)

@@ -96,7 +98,7 @@ def delete_book_from_details(book_id):
@editbook.route("/delete/<int:book_id>/<string:book_format>", methods=["POST"])
@login_required
def delete_book_ajax(book_id, book_format):
return delete_book_from_table(book_id, book_format, False)
return delete_book_from_table(book_id, book_format, False, request.form.to_dict().get('location', ""))

@editbook.route("/admin/book/<int:book_id>", methods=['GET'])

@@ -823,7 +825,7 @@ def delete_whole_book(book_id, book):
calibre_db.session.query(db.Books).filter(db.Books.id == book_id).delete()

def render_delete_book_result(book_format, json_response, warning, book_id):
def render_delete_book_result(book_format, json_response, warning, book_id, location=""):
if book_format:
if json_response:
return json.dumps([warning, {"location": url_for("edit-book.show_edit_book", book_id=book_id),

@@ -835,16 +837,16 @@ def render_delete_book_result(book_format, json_response, warning, book_id):
return redirect(url_for('edit-book.show_edit_book', book_id=book_id))
else:
if json_response:
return json.dumps([warning, {"location": url_for('web.index'),
return json.dumps([warning, {"location": get_redirect_location(location, "web.index"),
"type": "success",
"format": book_format,
"message": _('Book Successfully Deleted')}])
else:
flash(_('Book Successfully Deleted'), category="success")
return redirect(url_for('web.index'))
return redirect(get_redirect_location(location, "web.index"))

def delete_book_from_table(book_id, book_format, json_response):
def delete_book_from_table(book_id, book_format, json_response, location=""):
warning = {}
if current_user.role_delete_books():
book = calibre_db.get_book(book_id)

@@ -891,7 +893,7 @@ def delete_book_from_table(book_id, book_format, json_response):
else:
# book not found
log.error('Book with id "%s" could not be deleted: not found', book_id)
return render_delete_book_result(book_format, json_response, warning, book_id)
return render_delete_book_result(book_format, json_response, warning, book_id, location)
message = _("You are missing permissions to delete books")
if json_response:
return json.dumps({"location": url_for("edit-book.show_edit_book", book_id=book_id),

@@ -1003,14 +1005,18 @@ def edit_book_series_index(series_index, book):
def edit_book_comments(comments, book):
modify_date = False
if comments:
try:
if BLEACH:
comments = clean_html(comments, tags=set(), attributes=set())
else:
comments = clean_html(comments)
except ParserError as e:
log.error("Comments of book {} are corrupted: {}".format(book.id, e))
comments = ""
comments = clean_string(comments, book.id)
#try:
# if BLEACH:
# comments = clean_html(comments, tags=set(), attributes=set())
# else:
# comments = clean_html(comments)
#except ParserError as e:
# log.error("Comments of book {} are corrupted: {}".format(book.id, e))
# comments = ""
#except TypeError as e:
# log.error("Comments can't be parsed, maybe 'lxml' is too new, try installing 'bleach': {}".format(e))
# comments = ""
if len(book.comments):
if book.comments[0].text != comments:
book.comments[0].text = comments

@@ -1068,7 +1074,19 @@ def edit_cc_data_value(book_id, book, c, to_save, cc_db_value, cc_string):
elif c.datatype == 'comments':
to_save[cc_string] = Markup(to_save[cc_string]).unescape()
if to_save[cc_string]:
to_save[cc_string] = clean_html(to_save[cc_string])
to_save[cc_string] = clean_string(to_save[cc_string], book_id)
#try:
# if BLEACH:
# to_save[cc_string] = clean_html(to_save[cc_string], tags=set(), attributes=set())
# else:
# to_save[cc_string] = clean_html(to_save[cc_string])
#except ParserError as e:
# log.error("Customs Comments of book {} are corrupted: {}".format(book_id, e))
# to_save[cc_string] = ""
#except TypeError as e:
# to_save[cc_string] = ""
# log.error("Customs Comments can't be parsed, maybe 'lxml' is too new, "
# "try installing 'bleach': {}".format(e))
elif c.datatype == 'datetime':
try:
to_save[cc_string] = datetime.strptime(to_save[cc_string], "%Y-%m-%d")
63  cps/embed_helper.py  (Normal file)

@@ -0,0 +1,63 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from uuid import uuid4
import os

from .file_helper import get_temp_dir
from .subproc_wrapper import process_open
from . import logger, config
from .constants import SUPPORTED_CALIBRE_BINARIES

log = logger.create()

def do_calibre_export(book_id, book_format):
try:
quotes = [3, 5, 7, 9]
tmp_dir = get_temp_dir()
calibredb_binarypath = get_calibre_binarypath("calibredb")
temp_file_name = str(uuid4())
my_env = os.environ.copy()
if config.config_calibre_split:
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
library_path = config.config_calibre_split_dir
else:
library_path = config.config_calibre_dir
opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path,
'--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name),
str(book_id)]
p = process_open(opf_command, quotes, my_env)
_, err = p.communicate()
if err:
log.error('Metadata embedder encountered an error: %s', err)
return tmp_dir, temp_file_name
except OSError as ex:
# ToDo real error handling
log.error_or_exception(ex)
return None, None

def get_calibre_binarypath(binary):
binariesdir = config.config_binariesdir
if binariesdir:
try:
return os.path.join(binariesdir, SUPPORTED_CALIBRE_BINARIES[binary])
except KeyError as ex:
log.error("Binary not supported by Calibre-Web: %s", SUPPORTED_CALIBRE_BINARIES[binary])
pass
return ""
29  cps/epub.py

@@ -23,10 +23,12 @@ from lxml import etree
from . import isoLanguages, cover
from . import config, logger
from .helper import split_authors
from .epub_helper import get_content_opf, default_ns
from .constants import BookMeta

log = logger.create()

def _extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
if cover_file is None:
return None

@@ -44,25 +46,15 @@ def _extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
return cover.cover_processing(tmp_file_name, cf, extension)

def get_epub_layout(book, book_data):
ns = {
'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
'pkg': 'http://www.idpf.org/2007/opf',
}
file_path = os.path.normpath(os.path.join(config.get_book_path(),
book.path, book_data.name + "." + book_data.format.lower()))

try:
epubZip = zipfile.ZipFile(file_path)
txt = epubZip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cfname = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epubZip.read(cfname)
tree, __ = get_content_opf(file_path, default_ns)
p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]

tree = etree.fromstring(cf)
p = tree.xpath('/pkg:package/pkg:metadata', namespaces=ns)[0]

layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=ns)
except (etree.XMLSyntaxError, KeyError, IndexError) as e:
layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns)
except (etree.XMLSyntaxError, KeyError, IndexError, OSError) as e:
log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e))
layout = []

@@ -79,13 +71,7 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
'dc': 'http://purl.org/dc/elements/1.1/'
}

epub_zip = zipfile.ZipFile(tmp_file_path)

txt = epub_zip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epub_zip.read(cf_name)
tree = etree.fromstring(cf)
tree, cf_name = get_content_opf(tmp_file_path, ns)

cover_path = os.path.dirname(cf_name)

@@ -128,6 +114,7 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):

epub_metadata = parse_epub_series(ns, tree, epub_metadata)

epub_zip = zipfile.ZipFile(tmp_file_path)
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)

identifiers = []
166  cps/epub_helper.py  (Normal file)

@@ -0,0 +1,166 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018 lemmsh, Kennyl, Kyosfonica, matthazinski
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import zipfile
from lxml import etree

from . import isoLanguages

default_ns = {
'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
'pkg': 'http://www.idpf.org/2007/opf',
}

OPF_NAMESPACE = "http://www.idpf.org/2007/opf"
PURL_NAMESPACE = "http://purl.org/dc/elements/1.1/"

OPF = "{%s}" % OPF_NAMESPACE
PURL = "{%s}" % PURL_NAMESPACE

etree.register_namespace("opf", OPF_NAMESPACE)
etree.register_namespace("dc", PURL_NAMESPACE)

OPF_NS = {None: OPF_NAMESPACE} # the default namespace (no prefix)
NSMAP = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}

def updateEpub(src, dest, filename, data, ):
# create a temp copy of the archive without filename
with zipfile.ZipFile(src, 'r') as zin:
with zipfile.ZipFile(dest, 'w') as zout:
zout.comment = zin.comment # preserve the comment
for item in zin.infolist():
if item.filename != filename:
zout.writestr(item, zin.read(item.filename))

# now add filename with its new data
with zipfile.ZipFile(dest, mode='a', compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr(filename, data)

def get_content_opf(file_path, ns=default_ns):
epubZip = zipfile.ZipFile(file_path)
txt = epubZip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epubZip.read(cf_name)

return etree.fromstring(cf), cf_name

def create_new_metadata_backup(book, custom_columns, export_language, translated_cover_name, lang_type=3):
# generate root package element
package = etree.Element(OPF + "package", nsmap=OPF_NS)
package.set("unique-identifier", "uuid_id")
package.set("version", "2.0")

# generate metadata element and all sub elements of it
metadata = etree.SubElement(package, "metadata", nsmap=NSMAP)
identifier = etree.SubElement(metadata, PURL + "identifier", id="calibre_id", nsmap=NSMAP)
identifier.set(OPF + "scheme", "calibre")
identifier.text = str(book.id)
identifier2 = etree.SubElement(metadata, PURL + "identifier", id="uuid_id", nsmap=NSMAP)
identifier2.set(OPF + "scheme", "uuid")
identifier2.text = book.uuid
for i in book.identifiers:
identifier = etree.SubElement(metadata, PURL + "identifier", nsmap=NSMAP)
identifier.set(OPF + "scheme", i.format_type())
identifier.text = str(i.val)
title = etree.SubElement(metadata, PURL + "title", nsmap=NSMAP)
title.text = book.title
for author in book.authors:
creator = etree.SubElement(metadata, PURL + "creator", nsmap=NSMAP)
creator.text = str(author.name)
creator.set(OPF + "file-as", book.author_sort) # ToDo Check
creator.set(OPF + "role", "aut")
contributor = etree.SubElement(metadata, PURL + "contributor", nsmap=NSMAP)
contributor.text = "calibre (5.7.2) [https://calibre-ebook.com]"
contributor.set(OPF + "file-as", "calibre") # ToDo Check
contributor.set(OPF + "role", "bkp")

date = etree.SubElement(metadata, PURL + "date", nsmap=NSMAP)
date.text = '{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(d=book.pubdate)
if book.comments and book.comments[0].text:
for b in book.comments:
description = etree.SubElement(metadata, PURL + "description", nsmap=NSMAP)
description.text = b.text
for b in book.publishers:
publisher = etree.SubElement(metadata, PURL + "publisher", nsmap=NSMAP)
publisher.text = str(b.name)
if not book.languages:
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
language.text = export_language
else:
for b in book.languages:
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
language.text = str(b.lang_code) if lang_type == 3 else isoLanguages.get(part3=b.lang_code).part1
for b in book.tags:
subject = etree.SubElement(metadata, PURL + "subject", nsmap=NSMAP)
subject.text = str(b.name)
etree.SubElement(metadata, "meta", name="calibre:author_link_map",
content="{" + ", ".join(['"' + str(a.name) + '": ""' for a in book.authors]) + "}",
nsmap=NSMAP)
for b in book.series:
etree.SubElement(metadata, "meta", name="calibre:series",
content=str(str(b.name)),
nsmap=NSMAP)
if book.series:
etree.SubElement(metadata, "meta", name="calibre:series_index",
content=str(book.series_index),
nsmap=NSMAP)
if len(book.ratings) and book.ratings[0].rating > 0:
etree.SubElement(metadata, "meta", name="calibre:rating",
content=str(book.ratings[0].rating),
nsmap=NSMAP)
etree.SubElement(metadata, "meta", name="calibre:timestamp",
content='{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(
d=book.timestamp),
nsmap=NSMAP)
etree.SubElement(metadata, "meta", name="calibre:title_sort",
content=book.sort,
nsmap=NSMAP)
sequence = 0
for cc in custom_columns:
value = None
extra = None
cc_entry = getattr(book, "custom_column_" + str(cc.id))
if cc_entry.__len__():
value = [c.value for c in cc_entry] if cc.is_multiple else cc_entry[0].value
extra = cc_entry[0].extra if hasattr(cc_entry[0], "extra") else None
etree.SubElement(metadata, "meta", name="calibre:user_metadata:#{}".format(cc.label),
content=cc.to_json(value, extra, sequence),
nsmap=NSMAP)
sequence += 1

# generate guide element and all sub elements of it
# Title is translated from default export language
guide = etree.SubElement(package, "guide")
etree.SubElement(guide, "reference", type="cover", title=translated_cover_name, href="cover.jpg")

return package

def replace_metadata(tree, package):
rep_element = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
new_element = package.xpath('//metadata', namespaces=default_ns)[0]
tree.replace(rep_element, new_element)
return etree.tostring(tree,
xml_declaration=True,
encoding='utf-8',
pretty_print=True).decode('utf-8')
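For context: get_content_opf() above factors out the container.xml lookup that cps/epub.py previously inlined. A hedged usage sketch; "book.epub" is a placeholder path:

```python
# Hedged sketch of using the helper factored out above; "book.epub" is a made-up path.
from cps.epub_helper import get_content_opf, default_ns

tree, opf_name = get_content_opf("book.epub", default_ns)  # parsed content.opf plus its path inside the zip
metadata = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
print(opf_name, len(metadata))  # e.g. OEBPS/content.opf and the number of metadata children
```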
32  cps/file_helper.py  (Normal file)

@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2023 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from tempfile import gettempdir
import os
import shutil

def get_temp_dir():
tmp_dir = os.path.join(gettempdir(), 'calibre_web')
if not os.path.isdir(tmp_dir):
os.mkdir(tmp_dir)
return tmp_dir

def del_temp_dir():
tmp_dir = os.path.join(gettempdir(), 'calibre_web')
shutil.rmtree(tmp_dir)
@@ -23,7 +23,6 @@
import os
import hashlib
import json
import tempfile
from uuid import uuid4
from time import time
from shutil import move, copyfile

@@ -34,6 +33,7 @@ from flask_login import login_required

from . import logger, gdriveutils, config, ub, calibre_db, csrf
from .admin import admin_required
from .file_helper import get_temp_dir

gdrive = Blueprint('gdrive', __name__, url_prefix='/gdrive')
log = logger.create()

@@ -139,9 +139,7 @@ try:
dbpath = os.path.join(config.config_calibre_dir, "metadata.db").encode()
if not response['deleted'] and response['file']['title'] == 'metadata.db' \
and response['file']['md5Checksum'] != hashlib.md5(dbpath): # nosec
tmp_dir = os.path.join(tempfile.gettempdir(), 'calibre_web')
if not os.path.isdir(tmp_dir):
os.mkdir(tmp_dir)
tmp_dir = get_temp_dir()

log.info('Database file updated')
copyfile(dbpath, os.path.join(tmp_dir, "metadata.db_" + str(current_milli_time())))
@@ -34,7 +34,6 @@ except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.exc import OperationalError, InvalidRequestError, IntegrityError
from sqlalchemy.orm.exc import StaleDataError
from sqlalchemy.sql.expression import text

try:
from httplib2 import __version__ as httplib2_version
132  cps/helper.py

@@ -22,12 +22,13 @@ import random
import io
import mimetypes
import re
import regex
import shutil
import socket
from datetime import datetime, timedelta
from tempfile import gettempdir
import requests
import unidecode
from uuid import uuid4

from flask import send_from_directory, make_response, redirect, abort, url_for
from flask_babel import gettext as _

@@ -54,12 +55,16 @@ from . import calibre_db, cli_param
from .tasks.convert import TaskConvert
from . import logger, config, db, ub, fs
from . import gdriveutils as gd
from .constants import STATIC_DIR as _STATIC_DIR, CACHE_TYPE_THUMBNAILS, THUMBNAIL_TYPE_COVER, THUMBNAIL_TYPE_SERIES
from .constants import (STATIC_DIR as _STATIC_DIR, CACHE_TYPE_THUMBNAILS, THUMBNAIL_TYPE_COVER, THUMBNAIL_TYPE_SERIES,
SUPPORTED_CALIBRE_BINARIES)
from .subproc_wrapper import process_wait
from .services.worker import WorkerThread
from .tasks.mail import TaskEmail
from .tasks.thumbnail import TaskClearCoverThumbnailCache, TaskGenerateCoverThumbnails
from .tasks.metadata_backup import TaskBackupMetadata
from .file_helper import get_temp_dir
from .epub_helper import get_content_opf, create_new_metadata_backup, updateEpub, replace_metadata
from .embed_helper import do_calibre_export

log = logger.create()

@@ -222,7 +227,7 @@ def send_mail(book_id, book_format, convert, ereader_mail, calibrepath, user_id)
email_text = N_("%(book)s send to eReader", book=link)
WorkerThread.add(user_id, TaskEmail(_("Send to eReader"), book.path, converted_file_name,
config.get_mail_settings(), ereader_mail,
email_text, _('This Email has been sent via Calibre-Web.')))
email_text, _('This Email has been sent via Calibre-Web.'),book.id))
return
return _("The requested file could not be read. Maybe wrong permissions?")

@@ -689,16 +694,18 @@ def valid_password(check_password):
if config.config_password_policy:
verify = ""
if config.config_password_min_length > 0:
verify += "^(?=.{" + str(config.config_password_min_length) + ",}$)"
verify += r"^(?=.{" + str(config.config_password_min_length) + ",}$)"
if config.config_password_number:
verify += "(?=.*?\d)"
verify += r"(?=.*?\d)"
if config.config_password_lower:
verify += "(?=.*?[a-z])"
verify += r"(?=.*?[\p{Ll}])"
if config.config_password_upper:
verify += "(?=.*?[A-Z])"
verify += r"(?=.*?[\p{Lu}])"
if config.config_password_character:
verify += r"(?=.*?[\p{Letter}])"
if config.config_password_special:
verify += "(?=.*?[^A-Za-z\s0-9])"
match = re.match(verify, check_password)
verify += r"(?=.*?[^\p{Letter}\s0-9])"
match = regex.match(verify, check_password)
if not match:
raise Exception(_("Password doesn't comply with password validation rules"))
return check_password
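For context: the hunk above moves the password-policy check from the stdlib re module to the third-party regex module, since Unicode property classes such as \p{Ll} and \p{Lu} only work there. A hedged sketch of the assembled lookahead pattern; the sample passwords are invented:

```python
# Hedged sketch: Unicode-aware password policy check with the "regex" module,
# mirroring the lookaheads built in the diff above (min length 8, digit, lower, upper).
import regex

verify = r"^(?=.{8,}$)" + r"(?=.*?\d)" + r"(?=.*?[\p{Ll}])" + r"(?=.*?[\p{Lu}])"
print(bool(regex.match(verify, "Sommer2024")))  # True: upper, lower, digit, length >= 8
print(bool(regex.match(verify, "sommer")))      # False: too short, no upper case, no digit
```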
@ -921,10 +928,7 @@ def save_cover(img, book_path):
|
|||
return False, _("Only jpg/jpeg files are supported as coverfile")
|
||||
|
||||
if config.config_use_google_drive:
|
||||
tmp_dir = os.path.join(gettempdir(), 'calibre_web')
|
||||
|
||||
if not os.path.isdir(tmp_dir):
|
||||
os.mkdir(tmp_dir)
|
||||
tmp_dir = get_temp_dir()
|
||||
ret, message = save_cover_from_filestorage(tmp_dir, "uploaded_cover.jpg", img)
|
||||
if ret is True:
|
||||
gd.uploadFileToEbooksFolder(os.path.join(book_path, 'cover.jpg').replace("\\", "/"),
|
||||
|
@ -938,29 +942,68 @@ def save_cover(img, book_path):
|
|||
|
||||
|
||||
def do_download_file(book, book_format, client, data, headers):
|
||||
book_name = data.name
|
||||
if config.config_use_google_drive:
|
||||
# startTime = time.time()
|
||||
df = gd.getFileFromEbooksFolder(book.path, data.name + "." + book_format)
|
||||
df = gd.getFileFromEbooksFolder(book.path, book_name + "." + book_format)
|
||||
# log.debug('%s', time.time() - startTime)
|
||||
if df:
|
||||
return gd.do_gdrive_download(df, headers)
|
||||
if config.config_embed_metadata and (
|
||||
(book_format == "kepub" and config.config_kepubifypath ) or
|
||||
(book_format != "kepub" and config.config_binariesdir)):
|
||||
output_path = os.path.join(config.config_calibre_dir, book.path)
|
||||
if not os.path.exists(output_path):
|
||||
os.makedirs(output_path)
|
||||
output = os.path.join(config.config_calibre_dir, book.path, book_name + "." + book_format)
|
||||
gd.downloadFile(book.path, book_name + "." + book_format, output)
|
||||
if book_format == "kepub" and config.config_kepubifypath:
|
||||
filename, download_name = do_kepubify_metadata_replace(book, output)
|
||||
elif book_format != "kepub" and config.config_binariesdir:
|
||||
filename, download_name = do_calibre_export(book.id, book_format)
|
||||
else:
|
||||
return gd.do_gdrive_download(df, headers)
|
||||
else:
|
||||
abort(404)
|
||||
else:
|
||||
filename = os.path.join(config.get_book_path(), book.path)
|
||||
if not os.path.isfile(os.path.join(filename, data.name + "." + book_format)):
|
||||
if not os.path.isfile(os.path.join(filename, book_name + "." + book_format)):
|
||||
# ToDo: improve error handling
|
||||
log.error('File not found: %s', os.path.join(filename, data.name + "." + book_format))
|
||||
log.error('File not found: %s', os.path.join(filename, book_name + "." + book_format))
|
||||
|
||||
if client == "kobo" and book_format == "kepub":
|
||||
headers["Content-Disposition"] = headers["Content-Disposition"].replace(".kepub", ".kepub.epub")
|
||||
|
||||
response = make_response(send_from_directory(filename, data.name + "." + book_format))
|
||||
# ToDo Check headers parameter
|
||||
for element in headers:
|
||||
response.headers[element[0]] = element[1]
|
||||
log.info('Downloading file: {}'.format(os.path.join(filename, data.name + "." + book_format)))
|
||||
return response
|
||||
if book_format == "kepub" and config.config_kepubifypath and config.config_embed_metadata:
|
||||
filename, download_name = do_kepubify_metadata_replace(book, os.path.join(filename,
|
||||
book_name + "." + book_format))
|
||||
elif book_format != "kepub" and config.config_binariesdir and config.config_embed_metadata:
|
||||
filename, download_name = do_calibre_export(book.id, book_format)
|
||||
else:
|
||||
download_name = book_name
|
||||
|
||||
response = make_response(send_from_directory(filename, download_name + "." + book_format))
|
||||
# ToDo Check headers parameter
|
||||
for element in headers:
|
||||
response.headers[element[0]] = element[1]
|
||||
log.info('Downloading file: {}'.format(os.path.join(filename, book_name + "." + book_format)))
|
||||
return response
|
||||
|
||||
|
||||
def do_kepubify_metadata_replace(book, file_path):
|
||||
custom_columns = (calibre_db.session.query(db.CustomColumns)
|
||||
.filter(db.CustomColumns.mark_for_delete == 0)
|
||||
.filter(db.CustomColumns.datatype.notin_(db.cc_exceptions))
|
||||
.order_by(db.CustomColumns.label).all())
|
||||
|
||||
tree, cf_name = get_content_opf(file_path)
|
||||
package = create_new_metadata_backup(book, custom_columns, current_user.locale, _("Cover"), lang_type=2)
|
||||
content = replace_metadata(tree, package)
|
||||
tmp_dir = get_temp_dir()
|
||||
temp_file_name = str(uuid4())
|
||||
# open zipfile and replace metadata block in content.opf
|
||||
updateEpub(file_path, os.path.join(tmp_dir, temp_file_name + ".kepub"), cf_name, content)
|
||||
return tmp_dir, temp_file_name
|
||||
|
||||
|
||||
##################################
|
||||
|
||||
|
@ -984,6 +1027,47 @@ def check_unrar(unrar_location):
|
|||
return _('Error executing UnRar')
|
||||
|
||||
|
||||
def check_calibre(calibre_location):
|
||||
if not calibre_location:
|
||||
return
|
||||
|
||||
if not os.path.exists(calibre_location):
|
||||
return _('Could not find the specified directory')
|
||||
|
||||
if not os.path.isdir(calibre_location):
|
||||
return _('Please specify a directory, not a file')
|
||||
|
||||
try:
|
||||
supported_binary_paths = [os.path.join(calibre_location, binary)
|
||||
for binary in SUPPORTED_CALIBRE_BINARIES.values()]
|
||||
binaries_available = [os.path.isfile(binary_path) for binary_path in supported_binary_paths]
|
||||
binaries_executable = [os.access(binary_path, os.X_OK) for binary_path in supported_binary_paths]
|
||||
if all(binaries_available) and all(binaries_executable):
|
||||
values = [process_wait([binary_path, "--version"], pattern=r'\(calibre (.*)\)')
|
||||
for binary_path in supported_binary_paths]
|
||||
if all(values):
|
||||
version = values[0].group(1)
|
||||
log.debug("calibre version %s", version)
|
||||
else:
|
||||
return _('Calibre binaries not viable')
|
||||
else:
|
||||
ret_val = []
|
||||
missing_binaries=[path for path, available in
|
||||
zip(SUPPORTED_CALIBRE_BINARIES.values(), binaries_available) if not available]
|
||||
|
||||
missing_perms=[path for path, available in
|
||||
zip(SUPPORTED_CALIBRE_BINARIES.values(), binaries_executable) if not available]
|
||||
if missing_binaries:
|
||||
ret_val.append(_('Missing calibre binaries: %(missing)s', missing=", ".join(missing_binaries)))
|
||||
if missing_perms:
|
||||
ret_val.append(_('Missing executable permissions: %(missing)s', missing=", ".join(missing_perms)))
|
||||
return ", ".join(ret_val)
|
||||
|
||||
except (OSError, UnicodeDecodeError) as err:
|
||||
log.error_or_exception(err)
|
||||
return _('Error executing Calibre')
|
||||
|
||||
|
||||
def json_serial(obj):
|
||||
"""JSON serializer for objects not serializable by default json code"""
|
||||
|
||||
|
@ -1008,7 +1092,7 @@ def tags_filters():
|
|||
|
||||
|
||||
# checks if domain is in database (including wildcards)
|
||||
# example SELECT * FROM @TABLE WHERE 'abcdefg' LIKE Name;
|
||||
# example SELECT * FROM @TABLE WHERE 'abcdefg' LIKE Name;
|
||||
# from https://code.luasoftware.com/tutorials/flask/execute-raw-sql-in-flask-sqlalchemy/
|
||||
# in all calls the email address is checked for validity
|
||||
def check_valid_domain(domain_text):
|
||||
|
|
|
@ -124,7 +124,7 @@ def formatseriesindex_filter(series_index):
|
|||
return int(series_index)
|
||||
else:
|
||||
return series_index
|
||||
except ValueError:
|
||||
except (ValueError, TypeError):
|
||||
return series_index
|
||||
return 0
|
||||
|
||||
|
|
|
@ -156,6 +156,9 @@ def requires_kobo_auth(f):
|
|||
limiter.check()
|
||||
except RateLimitExceeded:
|
||||
return abort(429)
|
||||
except (ConnectionError, Exception) as e:
|
||||
log.error("Connection error to limiter backend: %s", e)
|
||||
return abort(429)
|
||||
user = (
|
||||
ub.session.query(ub.User)
|
||||
.join(ub.RemoteAuthToken)
|
||||
|
|
|
@ -97,12 +97,14 @@ class LubimyCzytac(Metadata):
|
|||
LANGUAGES = f"{CONTAINER}//dt[contains(text(),'Język:')]{SIBLINGS}/text()"
|
||||
DESCRIPTION = f"{CONTAINER}//div[@class='collapse-content']"
|
||||
SERIES = f"{CONTAINER}//span/a[contains(@href,'/cykl/')]/text()"
|
||||
TRANSLATOR = f"{CONTAINER}//dt[contains(text(),'Tłumacz:')]{SIBLINGS}/a/text()"
|
||||
|
||||
DETAILS = "//div[@id='book-details']"
|
||||
PUBLISH_DATE = "//dt[contains(@title,'Data pierwszego wydania"
|
||||
FIRST_PUBLISH_DATE = f"{DETAILS}{PUBLISH_DATE} oryginalnego')]{SIBLINGS}[1]/text()"
|
||||
FIRST_PUBLISH_DATE_PL = f"{DETAILS}{PUBLISH_DATE} polskiego')]{SIBLINGS}[1]/text()"
|
||||
TAGS = "//nav[@aria-label='breadcrumbs']//a[contains(@href,'/ksiazki/k/')]/span/text()"
|
||||
TAGS = "//a[contains(@href,'/ksiazki/t/')]/text()" # "//nav[@aria-label='breadcrumbs']//a[contains(@href,'/ksiazki/k/')]/span/text()"
|
||||
|
||||
|
||||
RATING = "//meta[@property='books:rating:value']/@content"
|
||||
COVER = "//meta[@property='og:image']/@content"
|
||||
|
@ -158,6 +160,7 @@ class LubimyCzytac(Metadata):
|
|||
|
||||
class LubimyCzytacParser:
|
||||
PAGES_TEMPLATE = "<p id='strony'>Książka ma {0} stron(y).</p>"
|
||||
TRANSLATOR_TEMPLATE = "<p id='translator'>Tłumacz: {0}</p>"
|
||||
PUBLISH_DATE_TEMPLATE = "<p id='pierwsze_wydanie'>Data pierwszego wydania: {0}</p>"
|
||||
PUBLISH_DATE_PL_TEMPLATE = (
|
||||
"<p id='pierwsze_wydanie'>Data pierwszego wydania w Polsce: {0}</p>"
|
||||
|
@ -346,5 +349,9 @@ class LubimyCzytacParser:
|
|||
description += LubimyCzytacParser.PUBLISH_DATE_PL_TEMPLATE.format(
|
||||
first_publish_date_pl.strftime("%d.%m.%Y")
|
||||
)
|
||||
translator = self._parse_xpath_node(xpath=LubimyCzytac.TRANSLATOR)
|
||||
if translator:
|
||||
description += LubimyCzytacParser.TRANSLATOR_TEMPLATE.format(translator)
|
||||
|
||||
|
||||
return description
|
||||
|
|
43
cps/opds.py
|
@ -24,14 +24,14 @@ import datetime
|
|||
import json
|
||||
from urllib.parse import unquote_plus
|
||||
|
||||
from flask import Blueprint, request, render_template, make_response, abort, Response
|
||||
from flask import Blueprint, request, render_template, make_response, abort, Response, g
|
||||
from flask_login import current_user
|
||||
from flask_babel import get_locale
|
||||
from flask_babel import gettext as _
|
||||
from sqlalchemy.sql.expression import func, text, or_, and_, true
|
||||
from sqlalchemy.exc import InvalidRequestError, OperationalError
|
||||
|
||||
from . import logger, config, db, calibre_db, ub, isoLanguages
|
||||
from . import logger, config, db, calibre_db, ub, isoLanguages, constants
|
||||
from .usermanagement import requires_basic_auth_if_no_ano
|
||||
from .helper import get_download_link, get_book_cover
|
||||
from .pagination import Pagination
|
||||
|
@ -94,6 +94,8 @@ def feed_letter_books(book_id):
|
|||
@opds.route("/opds/new")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_new():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_RECENT):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
|
||||
db.Books, True, [db.Books.timestamp.desc()],
|
||||
|
@ -104,6 +106,8 @@ def feed_new():
|
|||
@opds.route("/opds/discover")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_discover():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_RANDOM):
|
||||
abort(404)
|
||||
query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
|
||||
entries = query.filter(calibre_db.common_filters()).order_by(func.random()).limit(config.config_books_per_page)
|
||||
pagination = Pagination(1, config.config_books_per_page, int(config.config_books_per_page))
|
||||
|
@ -113,6 +117,8 @@ def feed_discover():
|
|||
@opds.route("/opds/rated")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_best_rated():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_BEST_RATED):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
|
||||
db.Books, db.Books.ratings.any(db.Ratings.rating > 9),
|
||||
|
@ -124,6 +130,8 @@ def feed_best_rated():
|
|||
@opds.route("/opds/hot")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_hot():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_HOT):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
all_books = ub.session.query(ub.Downloads, func.count(ub.Downloads.book_id)).order_by(
|
||||
func.count(ub.Downloads.book_id).desc()).group_by(ub.Downloads.book_id)
|
||||
|
@ -146,12 +154,16 @@ def feed_hot():
|
|||
@opds.route("/opds/author")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_authorindex():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_AUTHOR):
|
||||
abort(404)
|
||||
return render_element_index(db.Authors.sort, db.books_authors_link, 'opds.feed_letter_author')
|
||||
|
||||
|
||||
@opds.route("/opds/author/letter/<book_id>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_letter_author(book_id):
|
||||
if not current_user.check_visibility(constants.SIDEBAR_AUTHOR):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
letter = true() if book_id == "00" else func.upper(db.Authors.sort).startswith(book_id)
|
||||
entries = calibre_db.session.query(db.Authors).join(db.books_authors_link).join(db.Books)\
|
||||
|
@ -173,6 +185,8 @@ def feed_author(book_id):
|
|||
@opds.route("/opds/publisher")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_publisherindex():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_PUBLISHER):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries = calibre_db.session.query(db.Publishers)\
|
||||
.join(db.books_publishers_link)\
|
||||
|
@ -194,12 +208,16 @@ def feed_publisher(book_id):
|
|||
@opds.route("/opds/category")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_categoryindex():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_CATEGORY):
|
||||
abort(404)
|
||||
return render_element_index(db.Tags.name, db.books_tags_link, 'opds.feed_letter_category')
|
||||
|
||||
|
||||
@opds.route("/opds/category/letter/<book_id>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_letter_category(book_id):
|
||||
if not current_user.check_visibility(constants.SIDEBAR_CATEGORY):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
letter = true() if book_id == "00" else func.upper(db.Tags.name).startswith(book_id)
|
||||
entries = calibre_db.session.query(db.Tags)\
|
||||
|
@ -223,12 +241,16 @@ def feed_category(book_id):
|
|||
@opds.route("/opds/series")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_seriesindex():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_SERIES):
|
||||
abort(404)
|
||||
return render_element_index(db.Series.sort, db.books_series_link, 'opds.feed_letter_series')
|
||||
|
||||
|
||||
@opds.route("/opds/series/letter/<book_id>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_letter_series(book_id):
|
||||
if not current_user.check_visibility(constants.SIDEBAR_SERIES):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
letter = true() if book_id == "00" else func.upper(db.Series.sort).startswith(book_id)
|
||||
entries = calibre_db.session.query(db.Series)\
|
||||
|
@ -258,6 +280,8 @@ def feed_series(book_id):
|
|||
@opds.route("/opds/ratings")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_ratingindex():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_RATING):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries = calibre_db.session.query(db.Ratings, func.count('books_ratings_link.book').label('count'),
|
||||
(db.Ratings.rating / 2).label('name')) \
|
||||
|
@ -284,6 +308,8 @@ def feed_ratings(book_id):
|
|||
@opds.route("/opds/formats")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_formatindex():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_FORMAT):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries = calibre_db.session.query(db.Data).join(db.Books)\
|
||||
.filter(calibre_db.common_filters()) \
|
||||
|
@ -291,7 +317,6 @@ def feed_formatindex():
|
|||
.order_by(db.Data.format).all()
|
||||
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
|
||||
len(entries))
|
||||
|
||||
element = list()
|
||||
for entry in entries:
|
||||
element.append(FeedObject(entry.format, entry.format))
|
||||
|
@ -314,6 +339,8 @@ def feed_format(book_id):
|
|||
@opds.route("/opds/language/")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_languagesindex():
|
||||
if not current_user.check_visibility(constants.SIDEBAR_LANGUAGE):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
if current_user.filter_language() == "all":
|
||||
languages = calibre_db.speaking_language()
|
||||
|
@ -341,6 +368,8 @@ def feed_languages(book_id):
|
|||
@opds.route("/opds/shelfindex")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_shelfindex():
|
||||
if not (current_user.is_authenticated or g.allow_anonymous):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
shelf = ub.session.query(ub.Shelf).filter(
|
||||
or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
|
||||
|
@ -353,6 +382,8 @@ def feed_shelfindex():
|
|||
@opds.route("/opds/shelf/<int:book_id>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_shelf(book_id):
|
||||
if not (current_user.is_authenticated or g.allow_anonymous):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
if current_user.is_anonymous:
|
||||
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.is_public == 1,
|
||||
|
@ -436,6 +467,8 @@ def feed_get_cover(book_id):
|
|||
@opds.route("/opds/readbooks")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_read_books():
|
||||
if not (current_user.check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous):
|
||||
return abort(403)
|
||||
off = request.args.get("offset") or 0
|
||||
result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, True, True)
|
||||
return render_xml_template('feed.xml', entries=result, pagination=pagination)
|
||||
|
@ -444,6 +477,8 @@ def feed_read_books():
|
|||
@opds.route("/opds/unreadbooks")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_unread_books():
|
||||
if not (current_user.check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous):
|
||||
return abort(403)
|
||||
off = request.args.get("offset") or 0
|
||||
result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, False, True)
|
||||
return render_xml_template('feed.xml', entries=result, pagination=pagination)
|
||||
|
@ -477,7 +512,7 @@ def feed_search(term):
|
|||
def render_xml_template(*args, **kwargs):
|
||||
# ToDo: return time in current timezone similar to %z
|
||||
currtime = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S+00:00")
|
||||
xml = render_template(current_time=currtime, instance=config.config_calibre_web_title, *args, **kwargs)
|
||||
xml = render_template(current_time=currtime, instance=config.config_calibre_web_title, constants=constants.sidebar_settings, *args, **kwargs)
|
||||
response = make_response(xml)
|
||||
response.headers["Content-Type"] = "application/atom+xml; charset=utf-8"
|
||||
return response
|
||||
|
|
21
cps/redirect.py
Normal file → Executable file
|
@ -29,7 +29,7 @@
|
|||
|
||||
from urllib.parse import urlparse, urljoin
|
||||
|
||||
from flask import request, url_for, redirect
|
||||
from flask import request, url_for, redirect, current_app
|
||||
|
||||
|
||||
def is_safe_url(target):
|
||||
|
@ -38,16 +38,15 @@ def is_safe_url(target):
|
|||
return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc
|
||||
|
||||
|
||||
def get_redirect_target():
|
||||
for target in request.values.get('next'), request.referrer:
|
||||
if not target:
|
||||
continue
|
||||
if is_safe_url(target):
|
||||
return target
|
||||
def remove_prefix(text, prefix):
|
||||
if text.startswith(prefix):
|
||||
return text[len(prefix):]
|
||||
return ""
|
||||
|
||||
|
||||
def redirect_back(endpoint, **values):
|
||||
target = request.form['next']
|
||||
if not target or not is_safe_url(target):
|
||||
def get_redirect_location(next, endpoint, **values):
|
||||
target = next or url_for(endpoint, **values)
|
||||
adapter = current_app.url_map.bind(urlparse(request.host_url).netloc)
|
||||
if not len(adapter.allowed_methods(remove_prefix(target, request.environ.get('HTTP_X_SCRIPT_NAME',"")))):
|
||||
target = url_for(endpoint, **values)
|
||||
return redirect(target)
|
||||
return target
|
||||
|
|
|
@ -21,6 +21,7 @@ import datetime
|
|||
from . import config, constants
|
||||
from .services.background_scheduler import BackgroundScheduler, CronTrigger, use_APScheduler
|
||||
from .tasks.database import TaskReconnectDatabase
|
||||
from .tasks.tempFolder import TaskDeleteTempFolder
|
||||
from .tasks.thumbnail import TaskGenerateCoverThumbnails, TaskGenerateSeriesThumbnails, TaskClearCoverThumbnailCache
|
||||
from .services.worker import WorkerThread
|
||||
from .tasks.metadata_backup import TaskBackupMetadata
|
||||
|
@ -31,6 +32,9 @@ def get_scheduled_tasks(reconnect=True):
|
|||
if reconnect:
|
||||
tasks.append([lambda: TaskReconnectDatabase(), 'reconnect', False])
|
||||
|
||||
# Delete temp folder
|
||||
tasks.append([lambda: TaskDeleteTempFolder(), 'delete temp', True])
|
||||
|
||||
# Generate metadata.opf file for each changed book
|
||||
if config.schedule_metadata_backup:
|
||||
tasks.append([lambda: TaskBackupMetadata("en"), 'backup metadata', False])
|
||||
|
@ -65,9 +69,12 @@ def register_scheduled_tasks(reconnect=True):
|
|||
duration = config.schedule_duration
|
||||
|
||||
# Register scheduled tasks
|
||||
scheduler.schedule_tasks(tasks=get_scheduled_tasks(reconnect), trigger=CronTrigger(hour=start))
|
||||
timezone_info = datetime.datetime.now(datetime.timezone.utc).astimezone().tzinfo
|
||||
scheduler.schedule_tasks(tasks=get_scheduled_tasks(reconnect), trigger=CronTrigger(hour=start,
|
||||
timezone=timezone_info))
|
||||
end_time = calclulate_end_time(start, duration)
|
||||
scheduler.schedule(func=end_scheduled_tasks, trigger=CronTrigger(hour=end_time.hour, minute=end_time.minute),
|
||||
scheduler.schedule(func=end_scheduled_tasks, trigger=CronTrigger(hour=end_time.hour, minute=end_time.minute,
|
||||
timezone=timezone_info),
|
||||
name="end scheduled task")
|
||||
|
||||
# Kick-off tasks, if they should currently be running
|
||||
|
@ -86,6 +93,8 @@ def register_startup_tasks():
|
|||
# Ignore tasks that should currently be running, as these will be added when registering scheduled tasks
|
||||
if constants.APP_MODE in ['development', 'test'] and not should_task_be_running(start, duration):
|
||||
scheduler.schedule_tasks_immediately(tasks=get_scheduled_tasks(False))
|
||||
else:
|
||||
scheduler.schedule_tasks_immediately(tasks=[[lambda: TaskDeleteTempFolder(), 'delete temp', True]])
|
||||
|
||||
|
||||
def should_task_be_running(start, duration):
|
||||
|
|
|
@ -37,7 +37,7 @@ except ImportError:
|
|||
from .tornado_wsgi import MyWSGIContainer
|
||||
from tornado.httpserver import HTTPServer
|
||||
from tornado.ioloop import IOLoop
|
||||
from tornado.netutil import bind_unix_socket
|
||||
from tornado import netutil
|
||||
from tornado import version as _version
|
||||
VERSION = 'Tornado ' + _version
|
||||
_GEVENT = False
|
||||
|
@ -256,7 +256,7 @@ class WebServer(object):
|
|||
elif unix_socket_file and os.name != 'nt':
|
||||
self._prepare_unix_socket(unix_socket_file)
|
||||
output = "unix:" + unix_socket_file
|
||||
unix_socket = bind_unix_socket(self.unix_socket_file)
|
||||
unix_socket = netutil.bind_unix_socket(self.unix_socket_file)
|
||||
http_server.add_socket(unix_socket)
|
||||
# ensure current user and group have r/w permissions, no permissions for other users
|
||||
# this way the socket can be shared in a semi-secure manner
|
||||
|
|
|
@ -30,10 +30,10 @@ log = logger.create()
|
|||
|
||||
|
||||
def b64encode_json(json_data):
|
||||
return b64encode(json.dumps(json_data).encode())
|
||||
return b64encode(json.dumps(json_data).encode()).decode("utf-8")
|
||||
|
||||
|
||||
# Python3 has a timestamp() method we could be calling, however it's not avaiable in python2.
|
||||
# Python3 has a timestamp() method we could be calling, however it's not available in python2.
|
||||
def to_epoch_timestamp(datetime_object):
|
||||
return (datetime_object - datetime(1970, 1, 1)).total_seconds()
|
||||
|
||||
|
@ -47,7 +47,7 @@ def get_datetime_from_json(json_object, field_name):
|
|||
|
||||
|
||||
class SyncToken:
|
||||
""" The SyncToken is used to persist state accross requests.
|
||||
""" The SyncToken is used to persist state across requests.
|
||||
When serialized over the response headers, the Kobo device will propagate the token onto following
|
||||
requests to the service. As an example use-case, the SyncToken is used to detect books that have been added
|
||||
to the library since the last time the device synced to the server.
|
||||
|
|
|
@ -18,16 +18,49 @@
|
|||
|
||||
import time
|
||||
from functools import reduce
|
||||
import requests
|
||||
|
||||
from goodreads.client import GoodreadsClient
|
||||
from goodreads.request import GoodreadsRequest
|
||||
import xmltodict
|
||||
|
||||
try:
|
||||
from goodreads.client import GoodreadsClient
|
||||
import Levenshtein
|
||||
except ImportError:
|
||||
from betterreads.client import GoodreadsClient
|
||||
|
||||
try: import Levenshtein
|
||||
except ImportError: Levenshtein = False
|
||||
Levenshtein = False
|
||||
|
||||
from .. import logger
|
||||
from ..clean_html import clean_string
|
||||
|
||||
class my_GoodreadsClient(GoodreadsClient):
|
||||
|
||||
def request(self, *args, **kwargs):
|
||||
"""Create a GoodreadsRequest object and make that request"""
|
||||
req = my_GoodreadsRequest(self, *args, **kwargs)
|
||||
return req.request()
|
||||
|
||||
class GoodreadsRequestException(Exception):
|
||||
def __init__(self, error_msg, url):
|
||||
self.error_msg = error_msg
|
||||
self.url = url
|
||||
|
||||
def __str__(self):
|
||||
return self.url, ':', self.error_msg
|
||||
|
||||
|
||||
class my_GoodreadsRequest(GoodreadsRequest):
|
||||
|
||||
def request(self):
|
||||
resp = requests.get(self.host+self.path, params=self.params,
|
||||
headers={"User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:125.0) "
|
||||
"Gecko/20100101 Firefox/125.0"})
|
||||
if resp.status_code != 200:
|
||||
raise GoodreadsRequestException(resp.reason, self.path)
|
||||
if self.req_format == 'xml':
|
||||
data_dict = xmltodict.parse(resp.content)
|
||||
return data_dict['GoodreadsResponse']
|
||||
else:
|
||||
raise Exception("Invalid format")
|
||||
|
||||
|
||||
log = logger.create()
|
||||
|
@ -38,20 +71,20 @@ _CACHE_TIMEOUT = 23 * 60 * 60 # 23 hours (in seconds)
|
|||
_AUTHORS_CACHE = {}
|
||||
|
||||
|
||||
def connect(key=None, secret=None, enabled=True):
|
||||
def connect(key=None, enabled=True):
|
||||
global _client
|
||||
|
||||
if not enabled or not key or not secret:
|
||||
if not enabled or not key:
|
||||
_client = None
|
||||
return
|
||||
|
||||
if _client:
|
||||
# make sure the configuration has not changed since last we used the client
|
||||
if _client.client_key != key or _client.client_secret != secret:
|
||||
if _client.client_key != key:
|
||||
_client = None
|
||||
|
||||
if not _client:
|
||||
_client = GoodreadsClient(key, secret)
|
||||
_client = my_GoodreadsClient(key, None)
|
||||
|
||||
|
||||
def get_author_info(author_name):
|
||||
|
@ -76,6 +109,7 @@ def get_author_info(author_name):
|
|||
|
||||
if author_info:
|
||||
author_info._timestamp = now
|
||||
author_info.safe_about = clean_string(author_info.about)
|
||||
_AUTHORS_CACHE[author_name] = author_info
|
||||
return author_info
|
||||
|
||||
|
|
|
@ -266,3 +266,6 @@ class CalibreTask:
|
|||
def _handleSuccess(self):
|
||||
self.stat = STAT_FINISH_SUCCESS
|
||||
self.progress = 1
|
||||
|
||||
def __str__(self):
|
||||
return self.name
|
||||
|
|
|
@ -71,6 +71,14 @@ def add_to_shelf(shelf_id, book_id):
|
|||
else:
|
||||
maxOrder = maxOrder[0]
|
||||
|
||||
if not calibre_db.session.query(db.Books).filter(db.Books.id == book_id).one_or_none():
|
||||
log.error("Invalid Book Id: %s. Could not be added to shelf %s", book_id, shelf.name)
|
||||
if not xhr:
|
||||
flash(_("%(book_id)s is an invalid Book Id. Could not be added to Shelf", book_id=book_id),
|
||||
category="error")
|
||||
return redirect(url_for('web.index'))
|
||||
return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400
|
||||
|
||||
shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1))
|
||||
shelf.last_modified = datetime.utcnow()
|
||||
try:
|
||||
|
|
Binary file not shown.
Before: 60 KiB | After: 8.9 KiB
BIN
cps/static/icon.png
Normal file
Binary file not shown.
After: 27 KiB
5
cps/static/icon.svg
Normal file
|
@ -0,0 +1,5 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||
<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="140px" height="140px" style="shape-rendering:geometricPrecision; text-rendering:geometricPrecision; image-rendering:optimizeQuality; fill-rule:evenodd; clip-rule:evenodd" xmlns:xlink="http://www.w3.org/1999/xlink">
|
||||
<g><path style="opacity:1" fill="#45b29d" d="M 70.5,5.5 C 87.7691,3.12603 97.4358,10.4594 99.5,27.5C 95.637,46.6972 84.3037,59.1972 65.5,65C 60.9053,66.3929 56.2387,66.7262 51.5,66C 50.0692,65.5348 48.9025,64.7014 48,63.5C 47.3333,60.5 47.3333,57.5 48,54.5C 62.2513,56.0484 73.5846,50.715 82,38.5C 85.0332,33.8945 86.0332,28.8945 85,23.5C 83.0488,19.2854 79.7155,17.2854 75,17.5C 65.5257,19.0759 57.859,23.7425 52,31.5C 38.306,51.6368 33.9727,73.6368 39,97.5C 44.5639,116.532 56.7306,122.699 75.5,116C 80.6017,113.385 85.2684,110.218 89.5,106.5C 95.1927,108.891 96.6927,112.891 94,118.5C 78.4211,132.151 61.2544,134.651 42.5,126C 31.5182,117.21 25.3516,105.71 24,91.5C 20.9978,65.8515 27.3311,42.8515 43,22.5C 50.6154,14.1193 59.7821,8.45258 70.5,5.5 Z"/></g>
|
||||
</svg>
|
After: 1.2 KiB
|
@ -81,56 +81,6 @@ if ($("body.book").length > 0) {
|
|||
$(".rating").insertBefore(".hr");
|
||||
$("#remove-from-shelves").insertAfter(".hr");
|
||||
$(description).appendTo(".bookinfo")
|
||||
/* if book description is not in html format, Remove extra line breaks
|
||||
Remove blank lines/unnecessary spaces, split by line break to array
|
||||
Push array into .description div. If there is still a wall of text,
|
||||
find sentences and split wall into groups of three sentence paragraphs.
|
||||
If the book format is in html format, Keep html, but strip away inline
|
||||
styles and empty elements */
|
||||
|
||||
// If text is sitting in div as text node
|
||||
if ($(".comments:has(p)").length === 0) {
|
||||
newdesc = description.text()
|
||||
.replace(/^(?=\n)$|^\s*|\s*$|\n\n+/gm, "").split(/\n/);
|
||||
$(".comments").empty();
|
||||
$.each(newdesc, function (i, val) {
|
||||
$("div.comments").append("<p>" + newdesc[i] + "</p>");
|
||||
});
|
||||
$(".comments").fadeIn(100);
|
||||
} //If still a wall of text create 3 sentence paragraphs.
|
||||
if ($(".comments p").length === 1) {
|
||||
if (description.context != undefined) {
|
||||
newdesc = description.text()
|
||||
.replace(/^(?=\n)$|^\s*|\s*$|\n\n+/gm, "").split(/\n/);
|
||||
} else {
|
||||
newdesc = description.text();
|
||||
}
|
||||
doc = nlp(newdesc.toString());
|
||||
sentences = doc.map((m) => m.out("text"));
|
||||
sentences[0] = sentences[0].replace(",", "");
|
||||
$(".comments p").remove();
|
||||
let size = 3;
|
||||
let sentenceChunks = [];
|
||||
for (var i = 0; i < sentences.length; i += size) {
|
||||
sentenceChunks.push(sentences.slice(i, i + size));
|
||||
}
|
||||
let output = '';
|
||||
$.each(sentenceChunks, function (i, val) {
|
||||
let preOutput = '';
|
||||
$.each(val, function (i, val) {
|
||||
preOutput += val;
|
||||
});
|
||||
output += "<p>" + preOutput + "</p>";
|
||||
});
|
||||
$("div.comments").append(output);
|
||||
} else {
|
||||
$.each(description, function (i, val) {
|
||||
// $( description[i].outerHTML ).appendTo( ".comments" );
|
||||
$("div.comments :empty").remove();
|
||||
$("div.comments ").attr("style", "");
|
||||
});
|
||||
$("div.comments").fadeIn(100);
|
||||
}
|
||||
|
||||
// Sexy blurred backgrounds
|
||||
cover = $(".cover img").attr("src");
|
||||
|
|
|
@ -9,6 +9,7 @@
|
|||
"wordSequences": "Das Passwort enthält Buchstabensequenzen",
|
||||
"wordLowercase": "Bitte mindestens einen Kleinbuchstaben verwenden",
|
||||
"wordUppercase": "Bitte mindestens einen Großbuchstaben verwenden",
|
||||
"word": "Bitte mindestens einen Buchstaben verwenden",
|
||||
"wordOneNumber": "Bitte mindestens eine Ziffern verwenden",
|
||||
"wordOneSpecialChar": "Bitte mindestens ein Sonderzeichen verwenden",
|
||||
"errorList": "Fehler:",
|
||||
|
|
|
@ -8,6 +8,7 @@
|
|||
"wordRepetitions": "Too many repetitions",
|
||||
"wordSequences": "Your password contains sequences",
|
||||
"wordLowercase": "Use at least one lowercase character",
|
||||
"word": "Use at least one character",
|
||||
"wordUppercase": "Use at least one uppercase character",
|
||||
"wordOneNumber": "Use at least one number",
|
||||
"wordOneSpecialChar": "Use at least one special character",
|
||||
|
|
|
@ -144,13 +144,13 @@ try {
|
|||
|
||||
validation.wordTwoCharacterClasses = function(options, word, score) {
|
||||
var specialCharRE = new RegExp(
|
||||
'(.' + options.rules.specialCharClass + ')'
|
||||
'(.' + options.rules.specialCharClass + ')', 'u'
|
||||
);
|
||||
|
||||
if (
|
||||
word.match(/([a-z].*[A-Z])|([A-Z].*[a-z])/) ||
|
||||
(word.match(/([a-zA-Z])/) && word.match(/([0-9])/)) ||
|
||||
(word.match(specialCharRE) && word.match(/[a-zA-Z0-9_]/))
|
||||
word.match(/(\p{Ll}.*\p{Lu})|(\p{Lu}.*\p{Ll})/u) ||
|
||||
(word.match(/(\p{Letter})/u) && word.match(/([0-9])/)) ||
|
||||
(word.match(specialCharRE) && word.match(/[\p{Letter}0-9_]/u))
|
||||
) {
|
||||
return score;
|
||||
}
|
||||
|
@ -202,11 +202,15 @@ try {
|
|||
};
|
||||
|
||||
validation.wordLowercase = function(options, word, score) {
|
||||
return word.match(/[a-z]/) && score;
|
||||
return word.match(/\p{Ll}/u) && score;
|
||||
};
|
||||
|
||||
validation.wordUppercase = function(options, word, score) {
|
||||
return word.match(/[A-Z]/) && score;
|
||||
return word.match(/\p{Lu}/u) && score;
|
||||
};
|
||||
|
||||
validation.word = function(options, word, score) {
|
||||
return word.match(/\p{Letter}/u) && score;
|
||||
};
|
||||
|
||||
validation.wordOneNumber = function(options, word, score) {
|
||||
|
@ -218,7 +222,7 @@ try {
|
|||
};
|
||||
|
||||
validation.wordOneSpecialChar = function(options, word, score) {
|
||||
var specialCharRE = new RegExp(options.rules.specialCharClass);
|
||||
var specialCharRE = new RegExp(options.rules.specialCharClass, 'u');
|
||||
return word.match(specialCharRE) && score;
|
||||
};
|
||||
|
||||
|
@ -228,27 +232,27 @@ try {
|
|||
options.rules.specialCharClass +
|
||||
'.*' +
|
||||
options.rules.specialCharClass +
|
||||
')'
|
||||
')', 'u'
|
||||
);
|
||||
|
||||
return word.match(twoSpecialCharRE) && score;
|
||||
};
|
||||
|
||||
validation.wordUpperLowerCombo = function(options, word, score) {
|
||||
return word.match(/([a-z].*[A-Z])|([A-Z].*[a-z])/) && score;
|
||||
return word.match(/(\p{Ll}.*\p{Lu})|(\p{Lu}.*\p{Ll})/u) && score;
|
||||
};
|
||||
|
||||
validation.wordLetterNumberCombo = function(options, word, score) {
|
||||
return word.match(/([a-zA-Z])/) && word.match(/([0-9])/) && score;
|
||||
return word.match(/([\p{Letter}])/u) && word.match(/([0-9])/) && score;
|
||||
};
|
||||
|
||||
validation.wordLetterNumberCharCombo = function(options, word, score) {
|
||||
var letterNumberCharComboRE = new RegExp(
|
||||
'([a-zA-Z0-9].*' +
|
||||
'([\p{Letter}0-9].*' +
|
||||
options.rules.specialCharClass +
|
||||
')|(' +
|
||||
options.rules.specialCharClass +
|
||||
'.*[a-zA-Z0-9])'
|
||||
'.*[\p{Letter}0-9])', 'u'
|
||||
);
|
||||
|
||||
return word.match(letterNumberCharComboRE) && score;
|
||||
|
@ -341,6 +345,7 @@ defaultOptions.rules.scores = {
|
|||
wordTwoCharacterClasses: 2,
|
||||
wordRepetitions: -25,
|
||||
wordLowercase: 1,
|
||||
word: 1,
|
||||
wordUppercase: 3,
|
||||
wordOneNumber: 3,
|
||||
wordThreeNumbers: 5,
|
||||
|
@ -361,6 +366,7 @@ defaultOptions.rules.activated = {
|
|||
wordTwoCharacterClasses: true,
|
||||
wordRepetitions: true,
|
||||
wordLowercase: true,
|
||||
word: true,
|
||||
wordUppercase: true,
|
||||
wordOneNumber: true,
|
||||
wordThreeNumbers: true,
|
||||
|
@ -372,7 +378,7 @@ defaultOptions.rules.activated = {
|
|||
wordIsACommonPassword: true
|
||||
};
|
||||
defaultOptions.rules.raisePower = 1.4;
|
||||
defaultOptions.rules.specialCharClass = "(?=.*?[^A-Za-z\s0-9])"; //'[!,@,#,$,%,^,&,*,?,_,~]';
|
||||
defaultOptions.rules.specialCharClass = "(?=.*?[^\\p{Letter}\\s0-9])"; //'[!,@,#,$,%,^,&,*,?,_,~]';
|
||||
// List taken from https://github.com/danielmiessler/SecLists (MIT License)
|
||||
defaultOptions.rules.commonPasswords = [
|
||||
'123456',
|
||||
|
|
File diff suppressed because one or more lines are too long
|
@ -20,7 +20,7 @@ function getPath() {
|
|||
return jsFileLocation.substr(0, jsFileLocation.search("/static/js/libs/jquery.min.js")); // the js folder path
|
||||
}
|
||||
|
||||
function postButton(event, action){
|
||||
function postButton(event, action, location=""){
|
||||
event.preventDefault();
|
||||
var newForm = jQuery('<form>', {
|
||||
"action": action,
|
||||
|
@ -30,7 +30,14 @@ function postButton(event, action){
|
|||
'name': 'csrf_token',
|
||||
'value': $("input[name=\'csrf_token\']").val(),
|
||||
'type': 'hidden'
|
||||
})).appendTo('body');
|
||||
})).appendTo('body')
|
||||
if(location !== "") {
|
||||
newForm.append(jQuery('<input>', {
|
||||
'name': 'location',
|
||||
'value': location,
|
||||
'type': 'hidden'
|
||||
})).appendTo('body');
|
||||
}
|
||||
newForm.submit();
|
||||
}
|
||||
|
||||
|
@ -212,17 +219,20 @@ $("#delete_confirm").click(function(event) {
|
|||
$( ".navbar" ).after( '<div class="row-fluid text-center" >' +
|
||||
'<div id="flash_'+item.type+'" class="alert alert-'+item.type+'">'+item.message+'</div>' +
|
||||
'</div>');
|
||||
|
||||
}
|
||||
});
|
||||
$("#books-table").bootstrapTable("refresh");
|
||||
}
|
||||
});
|
||||
} else {
|
||||
postButton(event, getPath() + "/delete/" + deleteId);
|
||||
var loc = sessionStorage.getItem("back");
|
||||
if (!loc) {
|
||||
loc = $(this).data("back");
|
||||
}
|
||||
sessionStorage.removeItem("back");
|
||||
postButton(event, getPath() + "/delete/" + deleteId, location=loc);
|
||||
}
|
||||
}
|
||||
|
||||
});
|
||||
|
||||
//triggered when modal is about to be shown
|
||||
|
@ -541,6 +551,7 @@ $(function() {
|
|||
$.get(e.relatedTarget.href).done(function(content) {
|
||||
$modalBody.html(content);
|
||||
preFilters.remove(useCache);
|
||||
$("#back").remove();
|
||||
});
|
||||
})
|
||||
.on("hidden.bs.modal", function() {
|
||||
|
@ -621,8 +632,12 @@ $(function() {
|
|||
"btnfullsync",
|
||||
"GeneralDeleteModal",
|
||||
$(this).data('value'),
|
||||
function(value){
|
||||
path = getPath() + "/ajax/fullsync"
|
||||
function(userid) {
|
||||
if (userid) {
|
||||
path = getPath() + "/ajax/fullsync/" + userid
|
||||
} else {
|
||||
path = getPath() + "/ajax/fullsync"
|
||||
}
|
||||
$.ajax({
|
||||
method:"post",
|
||||
url: path,
|
||||
|
|
|
@ -24,7 +24,7 @@ $(document).ready(function() {
|
|||
},
|
||||
|
||||
}, function () {
|
||||
if ($('#password').data("verify")) {
|
||||
if ($('#password').data("verify") === "True") {
|
||||
// Initialized and ready to go
|
||||
var options = {};
|
||||
options.common = {
|
||||
|
@ -38,22 +38,20 @@ $(document).ready(function() {
|
|||
showVerdicts: false,
|
||||
}
|
||||
options.rules= {
|
||||
specialCharClass: "(?=.*?[^A-Za-z\\s0-9])",
|
||||
specialCharClass: "(?=.*?[^\\p{Letter}\\s0-9])",
|
||||
activated: {
|
||||
wordNotEmail: false,
|
||||
wordMinLength: $('#password').data("min"),
|
||||
// wordMaxLength: false,
|
||||
// wordInvalidChar: true,
|
||||
wordSimilarToUsername: false,
|
||||
wordSequences: false,
|
||||
wordTwoCharacterClasses: false,
|
||||
wordRepetitions: false,
|
||||
wordLowercase: $('#password').data("lower") === "True" ? true : false,
|
||||
wordUppercase: $('#password').data("upper") === "True" ? true : false,
|
||||
word: $('#password').data("word") === "True" ? true : false,
|
||||
wordOneNumber: $('#password').data("number") === "True" ? true : false,
|
||||
wordThreeNumbers: false,
|
||||
wordOneSpecialChar: $('#password').data("special") === "True" ? true : false,
|
||||
// wordTwoSpecialChar: true,
|
||||
wordUpperLowerCombo: false,
|
||||
wordLetterNumberCombo: false,
|
||||
wordLetterNumberCharCombo: false
|
||||
|
|
|
@ -19,8 +19,10 @@
|
|||
import os
|
||||
import re
|
||||
from glob import glob
|
||||
from shutil import copyfile
|
||||
from shutil import copyfile, copyfileobj
|
||||
from markupsafe import escape
|
||||
from time import time
|
||||
from uuid import uuid4
|
||||
|
||||
from sqlalchemy.exc import SQLAlchemyError
|
||||
from flask_babel import lazy_gettext as N_
|
||||
|
@ -32,13 +34,15 @@ from cps.subproc_wrapper import process_open
|
|||
from flask_babel import gettext as _
|
||||
from cps.kobo_sync_status import remove_synced_book
|
||||
from cps.ub import init_db_thread
|
||||
from cps.file_helper import get_temp_dir
|
||||
|
||||
from cps.tasks.mail import TaskEmail
|
||||
from cps import gdriveutils
|
||||
|
||||
from cps import gdriveutils, helper
|
||||
from cps.constants import SUPPORTED_CALIBRE_BINARIES
|
||||
|
||||
log = logger.create()
|
||||
|
||||
current_milli_time = lambda: int(round(time() * 1000))
|
||||
|
||||
class TaskConvert(CalibreTask):
|
||||
def __init__(self, file_path, book_id, task_message, settings, ereader_mail, user=None):
|
||||
|
@ -61,24 +65,33 @@ class TaskConvert(CalibreTask):
|
|||
data = worker_db.get_book_format(self.book_id, self.settings['old_book_format'])
|
||||
df = gdriveutils.getFileFromEbooksFolder(cur_book.path,
|
||||
data.name + "." + self.settings['old_book_format'].lower())
|
||||
df_cover = gdriveutils.getFileFromEbooksFolder(cur_book.path, "cover.jpg")
|
||||
if df:
|
||||
datafile = os.path.join(config.get_book_path(),
|
||||
cur_book.path,
|
||||
data.name + "." + self.settings['old_book_format'].lower())
|
||||
if df_cover:
|
||||
datafile_cover = os.path.join(config.get_book_path(),
|
||||
cur_book.path, "cover.jpg")
|
||||
if not os.path.exists(os.path.join(config.get_book_path(), cur_book.path)):
|
||||
os.makedirs(os.path.join(config.get_book_path(), cur_book.path))
|
||||
df.GetContentFile(datafile)
|
||||
if df_cover:
|
||||
df_cover.GetContentFile(datafile_cover)
|
||||
worker_db.session.close()
|
||||
else:
|
||||
# ToDo Include cover in error handling
|
||||
error_message = _("%(format)s not found on Google Drive: %(fn)s",
|
||||
format=self.settings['old_book_format'],
|
||||
fn=data.name + "." + self.settings['old_book_format'].lower())
|
||||
worker_db.session.close()
|
||||
return error_message
|
||||
return self._handleError(self, error_message)
|
||||
|
||||
filename = self._convert_ebook_format()
|
||||
if config.config_use_google_drive:
|
||||
os.remove(self.file_path + '.' + self.settings['old_book_format'].lower())
|
||||
if df_cover:
|
||||
os.remove(os.path.join(config.config_calibre_dir, cur_book.path, "cover.jpg"))
|
||||
|
||||
if filename:
|
||||
if config.config_use_google_drive:
|
||||
|
@ -97,6 +110,7 @@ class TaskConvert(CalibreTask):
|
|||
self.ereader_mail,
|
||||
EmailText,
|
||||
self.settings['body'],
|
||||
id=self.book_id,
|
||||
internal=True)
|
||||
)
|
||||
except Exception as ex:
|
||||
|
@ -112,7 +126,7 @@ class TaskConvert(CalibreTask):
|
|||
|
||||
# check to see if destination format already exists - or if book is in database
|
||||
# if it does - mark the conversion task as complete and return a success
|
||||
# this will allow send to E-Reader workflow to continue to work
|
||||
# this will allow to send to E-Reader workflow to continue to work
|
||||
if os.path.isfile(file_path + format_new_ext) or\
|
||||
local_db.get_book_format(self.book_id, self.settings['new_book_format']):
|
||||
log.info("Book id %d already converted to %s", book_id, format_new_ext)
|
||||
|
@ -152,7 +166,8 @@ class TaskConvert(CalibreTask):
|
|||
if not os.path.exists(config.config_converterpath):
|
||||
self._handleError(N_("Calibre ebook-convert %(tool)s not found", tool=config.config_converterpath))
|
||||
return
|
||||
check, error_message = self._convert_calibre(file_path, format_old_ext, format_new_ext)
|
||||
has_cover = local_db.get_book(book_id).has_cover
|
||||
check, error_message = self._convert_calibre(file_path, format_old_ext, format_new_ext, has_cover)
|
||||
|
||||
if check == 0:
|
||||
cur_book = local_db.get_book(book_id)
|
||||
|
@ -194,8 +209,15 @@ class TaskConvert(CalibreTask):
|
|||
return
|
||||
|
||||
def _convert_kepubify(self, file_path, format_old_ext, format_new_ext):
|
||||
if config.config_embed_metadata and config.config_binariesdir:
|
||||
tmp_dir, temp_file_name = helper.do_calibre_export(self.book_id, format_old_ext[1:])
|
||||
filename = os.path.join(tmp_dir, temp_file_name + format_old_ext)
|
||||
temp_file_path = tmp_dir
|
||||
else:
|
||||
filename = file_path + format_old_ext
|
||||
temp_file_path = os.path.dirname(file_path)
|
||||
quotes = [1, 3]
|
||||
command = [config.config_kepubifypath, (file_path + format_old_ext), '-o', os.path.dirname(file_path)]
|
||||
command = [config.config_kepubifypath, filename, '-o', temp_file_path, '-i']
|
||||
try:
|
||||
p = process_open(command, quotes)
|
||||
except OSError as e:
|
||||
|
@ -209,13 +231,12 @@ class TaskConvert(CalibreTask):
|
|||
if p.poll() is not None:
|
||||
break
|
||||
|
||||
# ToD Handle
|
||||
# process returncode
|
||||
check = p.returncode
|
||||
|
||||
# move file
|
||||
if check == 0:
|
||||
converted_file = glob(os.path.join(os.path.dirname(file_path), "*.kepub.epub"))
|
||||
converted_file = glob(os.path.splitext(filename)[0] + "*.kepub.epub")
|
||||
if len(converted_file) == 1:
|
||||
copyfile(converted_file[0], (file_path + format_new_ext))
|
||||
os.unlink(converted_file[0])
|
||||
|
@ -224,16 +245,35 @@ class TaskConvert(CalibreTask):
|
|||
folder=os.path.dirname(file_path))
|
||||
return check, None
|
||||
|
||||
def _convert_calibre(self, file_path, format_old_ext, format_new_ext):
|
||||
def _convert_calibre(self, file_path, format_old_ext, format_new_ext, has_cover):
|
||||
try:
|
||||
# Linux py2.7 encode as list without quotes no empty element for parameters
|
||||
# linux py3.x no encode and as list without quotes no empty element for parameters
|
||||
# windows py2.7 encode as string with quotes empty element for parameters is okay
|
||||
# windows py 3.x no encode and as string with quotes empty element for parameters is okay
|
||||
# separate handling for windows and linux
|
||||
quotes = [1, 2]
|
||||
# path_tmp_opf = self._embed_metadata()
|
||||
if config.config_embed_metadata:
|
||||
quotes = [3, 5]
|
||||
tmp_dir = get_temp_dir()
|
||||
calibredb_binarypath = os.path.join(config.config_binariesdir, SUPPORTED_CALIBRE_BINARIES["calibredb"])
|
||||
my_env = os.environ.copy()
|
||||
if config.config_calibre_split:
|
||||
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
|
||||
library_path = config.config_calibre_split_dir
|
||||
else:
|
||||
library_path = config.config_calibre_dir
|
||||
|
||||
opf_command = [calibredb_binarypath, 'show_metadata', '--as-opf', str(self.book_id),
|
||||
'--with-library', library_path]
|
||||
p = process_open(opf_command, quotes, my_env)
|
||||
p.wait()
|
||||
path_tmp_opf = os.path.join(tmp_dir, "metadata_" + str(uuid4()) + ".opf")
|
||||
with open(path_tmp_opf, 'w') as fd:
|
||||
copyfileobj(p.stdout, fd)
|
||||
|
||||
quotes = [1, 2, 4, 6]
|
||||
command = [config.config_converterpath, (file_path + format_old_ext),
|
||||
(file_path + format_new_ext)]
|
||||
if config.config_embed_metadata:
|
||||
command.extend(['--from-opf', path_tmp_opf])
|
||||
if has_cover:
|
||||
command.extend(['--cover', os.path.join(os.path.dirname(file_path), 'cover.jpg')])
|
||||
quotes_index = 3
|
||||
if config.config_calibre:
|
||||
parameters = config.config_calibre.split(" ")
|
||||
|
@ -276,9 +316,9 @@ class TaskConvert(CalibreTask):
|
|||
|
||||
def __str__(self):
|
||||
if self.ereader_mail:
|
||||
return "Convert {} {}".format(self.book_id, self.ereader_mail)
|
||||
return "Convert Book {} and mail it to {}".format(self.book_id, self.ereader_mail)
|
||||
else:
|
||||
return "Convert {}".format(self.book_id)
|
||||
return "Convert Book {}".format(self.book_id)
|
||||
|
||||
@property
|
||||
def is_cancellable(self):
|
||||
|
|
|
@ -28,12 +28,11 @@ from email.message import EmailMessage
|
|||
from email.utils import formatdate, parseaddr
|
||||
from email.generator import Generator
|
||||
from flask_babel import lazy_gettext as N_
|
||||
from email.utils import formatdate
|
||||
|
||||
from cps.services.worker import CalibreTask
|
||||
from cps.services import gmail
|
||||
from cps.embed_helper import do_calibre_export
|
||||
from cps import logger, config
|
||||
|
||||
from cps import gdriveutils
|
||||
import uuid
|
||||
|
||||
|
@ -110,7 +109,7 @@ class EmailSSL(EmailBase, smtplib.SMTP_SSL):
|
|||
|
||||
|
||||
class TaskEmail(CalibreTask):
|
||||
def __init__(self, subject, filepath, attachment, settings, recipient, task_message, text, internal=False):
|
||||
def __init__(self, subject, filepath, attachment, settings, recipient, task_message, text, id=0, internal=False):
|
||||
super(TaskEmail, self).__init__(task_message)
|
||||
self.subject = subject
|
||||
self.attachment = attachment
|
||||
|
@ -119,6 +118,7 @@ class TaskEmail(CalibreTask):
|
|||
self.recipient = recipient
|
||||
self.text = text
|
||||
self.asyncSMTP = None
|
||||
self.book_id = id
|
||||
self.results = dict()
|
||||
|
||||
# from calibre code:
|
||||
|
@ -141,7 +141,7 @@ class TaskEmail(CalibreTask):
|
|||
message['To'] = self.recipient
|
||||
message['Subject'] = self.subject
|
||||
message['Date'] = formatdate(localtime=True)
|
||||
message['Message-Id'] = "{}@{}".format(uuid.uuid4(), self.get_msgid_domain()) # f"<{uuid.uuid4()}@{get_msgid_domain(from_)}>" # make_msgid('calibre-web')
|
||||
message['Message-Id'] = "{}@{}".format(uuid.uuid4(), self.get_msgid_domain())
|
||||
message.set_content(self.text.encode('UTF-8'), "text", "plain")
|
||||
if self.attachment:
|
||||
data = self._get_attachment(self.filepath, self.attachment)
|
||||
|
@ -161,6 +161,8 @@ class TaskEmail(CalibreTask):
|
|||
try:
|
||||
# create MIME message
|
||||
msg = self.prepare_message()
|
||||
if not msg:
|
||||
return
|
||||
if self.settings['mail_server_type'] == 0:
|
||||
self.send_standard_email(msg)
|
||||
else:
|
||||
|
@ -236,10 +238,10 @@ class TaskEmail(CalibreTask):
|
|||
self.asyncSMTP = None
|
||||
self._progress = x
|
||||
|
||||
@classmethod
|
||||
def _get_attachment(cls, book_path, filename):
|
||||
def _get_attachment(self, book_path, filename):
|
||||
"""Get file as MIMEBase message"""
|
||||
calibre_path = config.get_book_path()
|
||||
extension = os.path.splitext(filename)[1][1:]
|
||||
if config.config_use_google_drive:
|
||||
df = gdriveutils.getFileFromEbooksFolder(book_path, filename)
|
||||
if df:
|
||||
|
@ -249,15 +251,22 @@ class TaskEmail(CalibreTask):
|
|||
df.GetContentFile(datafile)
|
||||
else:
|
||||
return None
|
||||
file_ = open(datafile, 'rb')
|
||||
data = file_.read()
|
||||
file_.close()
|
||||
if config.config_binariesdir and config.config_embed_metadata:
|
||||
data_path, data_file = do_calibre_export(self.book_id, extension)
|
||||
datafile = os.path.join(data_path, data_file + "." + extension)
|
||||
with open(datafile, 'rb') as file_:
|
||||
data = file_.read()
|
||||
os.remove(datafile)
|
||||
else:
|
||||
datafile = os.path.join(calibre_path, book_path, filename)
|
||||
try:
|
||||
file_ = open(os.path.join(calibre_path, book_path, filename), 'rb')
|
||||
data = file_.read()
|
||||
file_.close()
|
||||
if config.config_binariesdir and config.config_embed_metadata:
|
||||
data_path, data_file = do_calibre_export(self.book_id, extension)
|
||||
datafile = os.path.join(data_path, data_file + "." + extension)
|
||||
with open(datafile, 'rb') as file_:
|
||||
data = file_.read()
|
||||
if config.config_binariesdir and config.config_embed_metadata:
|
||||
os.remove(datafile)
|
||||
except IOError as e:
|
||||
log.error_or_exception(e, stacklevel=3)
|
||||
log.error('The requested file could not be read. Maybe wrong permissions?')
|
||||
|
|
|
@ -17,26 +17,13 @@
|
|||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import os
|
||||
from urllib.request import urlopen
|
||||
from lxml import etree
|
||||
|
||||
|
||||
from cps import config, db, gdriveutils, logger
|
||||
from cps.services.worker import CalibreTask
|
||||
from flask_babel import lazy_gettext as N_
|
||||
|
||||
OPF_NAMESPACE = "http://www.idpf.org/2007/opf"
|
||||
PURL_NAMESPACE = "http://purl.org/dc/elements/1.1/"
|
||||
|
||||
OPF = "{%s}" % OPF_NAMESPACE
|
||||
PURL = "{%s}" % PURL_NAMESPACE
|
||||
|
||||
etree.register_namespace("opf", OPF_NAMESPACE)
|
||||
etree.register_namespace("dc", PURL_NAMESPACE)
|
||||
|
||||
OPF_NS = {None: OPF_NAMESPACE} # the default namespace (no prefix)
|
||||
NSMAP = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}
|
||||
|
||||
from ..epub_helper import create_new_metadata_backup
|
||||
|
||||
class TaskBackupMetadata(CalibreTask):
|
||||
|
||||
|
@ -101,7 +88,8 @@ class TaskBackupMetadata(CalibreTask):
|
|||
self.calibre_db.session.close()
|
||||
|
||||
def open_metadata(self, book, custom_columns):
|
||||
package = self.create_new_metadata_backup(book, custom_columns)
|
||||
# package = self.create_new_metadata_backup(book, custom_columns)
|
||||
package = create_new_metadata_backup(book, custom_columns, self.export_language, self.translated_title)
|
||||
if config.config_use_google_drive:
|
||||
if not gdriveutils.is_gdrive_ready():
|
||||
raise Exception('Google Drive is configured but not ready')
|
||||
|
@ -123,93 +111,6 @@ class TaskBackupMetadata(CalibreTask):
|
|||
except Exception as ex:
|
||||
raise Exception('Writing Metadata failed with error: {} '.format(ex))
|
||||
|
||||
def create_new_metadata_backup(self, book, custom_columns):
|
||||
# generate root package element
|
||||
package = etree.Element(OPF + "package", nsmap=OPF_NS)
|
||||
package.set("unique-identifier", "uuid_id")
|
||||
package.set("version", "2.0")
|
||||
|
||||
# generate metadata element and all sub elements of it
|
||||
metadata = etree.SubElement(package, "metadata", nsmap=NSMAP)
|
||||
identifier = etree.SubElement(metadata, PURL + "identifier", id="calibre_id", nsmap=NSMAP)
|
||||
identifier.set(OPF + "scheme", "calibre")
|
||||
identifier.text = str(book.id)
|
||||
identifier2 = etree.SubElement(metadata, PURL + "identifier", id="uuid_id", nsmap=NSMAP)
|
||||
identifier2.set(OPF + "scheme", "uuid")
|
||||
identifier2.text = book.uuid
|
||||
title = etree.SubElement(metadata, PURL + "title", nsmap=NSMAP)
|
||||
title.text = book.title
|
||||
for author in book.authors:
|
||||
creator = etree.SubElement(metadata, PURL + "creator", nsmap=NSMAP)
|
||||
creator.text = str(author.name)
|
||||
creator.set(OPF + "file-as", book.author_sort) # ToDo Check
|
||||
creator.set(OPF + "role", "aut")
|
||||
contributor = etree.SubElement(metadata, PURL + "contributor", nsmap=NSMAP)
|
||||
contributor.text = "calibre (5.7.2) [https://calibre-ebook.com]"
|
||||
contributor.set(OPF + "file-as", "calibre") # ToDo Check
|
||||
contributor.set(OPF + "role", "bkp")
|
||||
|
||||
date = etree.SubElement(metadata, PURL + "date", nsmap=NSMAP)
|
||||
date.text = '{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(d=book.pubdate)
|
||||
if book.comments and book.comments[0].text:
|
||||
for b in book.comments:
|
||||
description = etree.SubElement(metadata, PURL + "description", nsmap=NSMAP)
|
||||
description.text = b.text
|
||||
for b in book.publishers:
|
||||
publisher = etree.SubElement(metadata, PURL + "publisher", nsmap=NSMAP)
|
||||
publisher.text = str(b.name)
|
||||
if not book.languages:
|
||||
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
|
||||
language.text = self.export_language
|
||||
else:
|
||||
for b in book.languages:
|
||||
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
|
||||
language.text = str(b.lang_code)
|
||||
for b in book.tags:
|
||||
subject = etree.SubElement(metadata, PURL + "subject", nsmap=NSMAP)
|
||||
subject.text = str(b.name)
|
||||
etree.SubElement(metadata, "meta", name="calibre:author_link_map",
|
||||
content="{" + ", ".join(['"' + str(a.name) + '": ""' for a in book.authors]) + "}",
|
||||
nsmap=NSMAP)
|
||||
for b in book.series:
|
||||
etree.SubElement(metadata, "meta", name="calibre:series",
|
||||
content=str(str(b.name)),
|
||||
nsmap=NSMAP)
|
||||
if book.series:
|
||||
etree.SubElement(metadata, "meta", name="calibre:series_index",
|
||||
content=str(book.series_index),
|
||||
nsmap=NSMAP)
|
||||
if len(book.ratings) and book.ratings[0].rating > 0:
|
||||
etree.SubElement(metadata, "meta", name="calibre:rating",
|
||||
content=str(book.ratings[0].rating),
|
||||
nsmap=NSMAP)
|
||||
etree.SubElement(metadata, "meta", name="calibre:timestamp",
|
||||
content='{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(
|
||||
d=book.timestamp),
|
||||
nsmap=NSMAP)
|
||||
etree.SubElement(metadata, "meta", name="calibre:title_sort",
|
||||
content=book.sort,
|
||||
nsmap=NSMAP)
|
||||
sequence = 0
|
||||
for cc in custom_columns:
|
||||
value = None
|
||||
extra = None
|
||||
cc_entry = getattr(book, "custom_column_" + str(cc.id))
|
||||
if cc_entry.__len__():
|
||||
value = [c.value for c in cc_entry] if cc.is_multiple else cc_entry[0].value
|
||||
extra = cc_entry[0].extra if hasattr(cc_entry[0], "extra") else None
|
||||
etree.SubElement(metadata, "meta", name="calibre:user_metadata:#{}".format(cc.label),
|
||||
content=cc.to_json(value, extra, sequence),
|
||||
nsmap=NSMAP)
|
||||
sequence += 1
|
||||
|
||||
# generate guide element and all sub elements of it
|
||||
# Title is translated from default export language
|
||||
guide = etree.SubElement(package, "guide")
|
||||
etree.SubElement(guide, "reference", type="cover", title=self.translated_title, href="cover.jpg")
|
||||
|
||||
return package
|
||||
|
||||
@property
|
||||
def name(self):
|
||||
return "Metadata backup"
|
||||
|
|
47
cps/tasks/tempFolder.py
Normal file
|
@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2023 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from urllib.request import urlopen

from flask_babel import lazy_gettext as N_

from cps import logger, file_helper
from cps.services.worker import CalibreTask


class TaskDeleteTempFolder(CalibreTask):
    def __init__(self, task_message=N_('Delete temp folder contents')):
        super(TaskDeleteTempFolder, self).__init__(task_message)
        self.log = logger.create()

    def run(self, worker_thread):
        try:
            file_helper.del_temp_dir()
        except FileNotFoundError:
            pass
        except (PermissionError, OSError) as e:
            self.log.error("Error deleting temp folder: {}".format(e))
        self._handleSuccess()

    @property
    def name(self):
        return "Delete Temp Folder"

    @property
    def is_cancellable(self):
        return False

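For context, a hedged sketch of how a task like this is typically put on the background queue. The WorkerThread.add(user_name, task) call mirrors how other Calibre-Web tasks appear to be scheduled, but this snippet is not part of the commit and the exact call site and signature are assumptions.

# Illustrative only: enqueue the cleanup task on the background worker.
# Assumes cps.services.worker exposes WorkerThread.add(user_name, task);
# verify against the actual codebase before relying on it.
from cps.services.worker import WorkerThread
from cps.tasks.tempFolder import TaskDeleteTempFolder

def schedule_temp_folder_cleanup(user_name="admin"):
    # The task ignores a missing temp folder (FileNotFoundError) and only
    # logs permission/OS errors, so enqueueing it is always safe.
    WorkerThread.add(user_name, TaskDeleteTempFolder())
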
@ -8,8 +8,8 @@
<img title="{{author.name}}" src="{{author.image_url}}" alt="{{author.name}}" class="author-photo pull-left">
{% endif %}

{%if author.about is not none %}
<p>{{author.about}}</p>
{%if author.safe_about is not none %}
<p>{{author.safe_about|safe}}</p>
{% endif %}

- {{_("via")}} <a href="{{author.link}}" class="author-link" target="_blank" rel="noopener">Goodreads</a>

@ -32,7 +32,7 @@
</div>
<div class="row display-flex">
{% for entry in entries %}
<div id="books" class="col-sm-3 col-lg-2 col-xs-6 book">
<div id="books" class="col-sm-3 col-lg-2 col-xs-6 book session">
<div class="cover">
<a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>
<span class="img" title="{{entry.Books.title}}">

@ -99,7 +99,7 @@
<h3>{{_("More by")}} {{ author.name.replace('|',',') }}</h3>
<div class="row">
{% for entry in other_books %}
<div class="col-sm-3 col-lg-2 col-xs-6 book">
<div class="col-sm-3 col-lg-2 col-xs-6 book session">
<div class="cover">
<a href="https://www.goodreads.com/book/show/{{ entry.gid['#text'] }}" target="_blank" rel="noopener">
<img title="{{entry.title}}" src="{{ entry.image_url }}" />

@ -18,7 +18,7 @@
</div>
<div class="form-group required">
<input type="checkbox" id="config_calibre_split" name="config_calibre_split" data-control="split_settings" data-t ="{{ config.config_calibre_split_dir }}" {% if config.config_calibre_split %}checked{% endif %} >
<label for="config_calibre_split">{{_('Separate Book files from Library')}}</label>
<label for="config_calibre_split">{{_('Separate Book Files from Library')}}</label>
</div>
<div data-related="split_settings">
<div class="form-group required input-group">

35
cps/templates/config_edit.html
Normal file → Executable file
@ -9,7 +9,7 @@
<h2>{{title}}</h2>
<form role="form" method="POST" autocomplete="off">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div class="panel-group col-md-10 col-lg-8">
<div class="panel-group col-md-11 col-lg-8">
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">

@ -103,6 +103,10 @@
<input type="checkbox" id="config_unicode_filename" name="config_unicode_filename" {% if config.config_unicode_filename %}checked{% endif %}>
<label for="config_unicode_filename">{{_('Convert non-English characters in title and author while saving to disk')}}</label>
</div>
<div class="form-group">
<input type="checkbox" id="config_embed_metadata" name="config_embed_metadata" {% if config.config_embed_metadata %}checked{% endif %}>
<label for="config_embed_metadata">{{_('Embed Metadata to Ebook File on Download/Conversion/e-mail (needs Calibre/Kepubify binaries)')}}</label>
</div>
<div class="form-group">
<input type="checkbox" id="config_uploading" data-control="upload_settings" name="config_uploading" {% if config.config_uploading %}checked{% endif %}>
<label for="config_uploading">{{_('Enable Uploads')}} {{_('(Please ensure that users also have upload permissions)')}}</label>

@ -151,17 +155,12 @@
<div class="form-group">
<input type="checkbox" id="config_use_goodreads" name="config_use_goodreads" data-control="goodreads-settings" {% if config.config_use_goodreads %}checked{% endif %}>
<label for="config_use_goodreads">{{_('Use Goodreads')}}</label>
<a href="https://www.goodreads.com/api/keys" target="_blank" style="margin-left: 5px">{{_('Create an API Key')}}</a>
</div>
<div data-related="goodreads-settings">
<div class="form-group">
<label for="config_goodreads_api_key">{{_('Goodreads API Key')}}</label>
<input type="text" class="form-control" id="config_goodreads_api_key" name="config_goodreads_api_key" value="{% if config.config_goodreads_api_key != None %}{{ config.config_goodreads_api_key }}{% endif %}" autocomplete="off">
</div>
<div class="form-group">
<label for="config_goodreads_api_secret_e">{{_('Goodreads API Secret')}}</label>
<input type="password" class="form-control" id="config_goodreads_api_secret_e" name="config_goodreads_api_secret_e" value="" autocomplete="off">
</div>
</div>
{% endif %}
<div class="form-group">

@ -323,12 +322,12 @@
</div>
<div id="collapsefive" class="panel-collapse collapse">
<div class="panel-body">
<label for="config_converterpath">{{_('Path to Calibre E-Book Converter')}}</label>
<label for="config_binariesdir">{{_('Path to Calibre Binaries')}}</label>
<div class="form-group input-group">
<input type="text" class="form-control" id="config_converterpath" name="config_converterpath" value="{% if config.config_converterpath != None %}{{ config.config_converterpath }}{% endif %}" autocomplete="off">
<span class="input-group-btn">
<button type="button" data-toggle="modal" id="converter_modal_path" data-link="config_converterpath" data-target="#fileModal" class="btn btn-default"><span class="glyphicon glyphicon-folder-open"></span></button>
</span>
<input type="text" class="form-control" id="config_binariesdir" name="config_binariesdir" value="{% if config.config_binariesdir != None %}{{ config.config_binariesdir }}{% endif %}" autocomplete="off">
<span class="input-group-btn">
<button type="button" data-toggle="modal" id="binaries_modal_path" data-link="config_binariesdir" data-folderonly="true" data-target="#fileModal" class="btn btn-default"><span class="glyphicon glyphicon-folder-open"></span></button>
</span>
</div>
<div class="form-group">
<label for="config_calibre">{{_('Calibre E-Book Converter Settings')}}</label>

@ -368,6 +367,16 @@
<input type="checkbox" id="config_ratelimiter" name="config_ratelimiter" {% if config.config_ratelimiter %}checked{% endif %}>
<label for="config_ratelimiter">{{_('Limit failed login attempts')}}</label>
</div>
<div data-related="ratelimiter_settings">
<div class="form-group" style="margin-left:10px;">
<label for="config_calibre">{{_('Configure Backend for Limiter')}}</label>
<input type="text" class="form-control" id="config_limiter_uri" name="config_limiter_uri" value="{% if config.config_limiter_uri != None %}{{ config.config_limiter_uri }}{% endif %}" autocomplete="off">
</div>
<div class="form-group" style="margin-left:10px;">
<label for="config_calibre">{{_('Options for Limiter')}}</label>
<input type="text" class="form-control" id="config_limiter_options" name="config_limiter_options" value="{% if config.config_limiter_options != None %}{{ config.config_limiter_options }}{% endif %}" autocomplete="off">
</div>
</div>
<div class="form-group">
<label for="config_session">{{_('Session protection')}}</label>
<select name="config_session" id="config_session" class="form-control">

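The two new fields above feed a Flask-Limiter style storage backend: the first takes a storage URI, the second extra storage options. A rough, illustrative sketch of what such values mean follows; the Redis URI, the option key, and the Limiter constructor usage are assumptions for illustration and are not taken from this diff.

# Illustrative only: what a limiter backend URI and options could look like.
# Assumes Flask-Limiter >= 3.x; not code from this commit.
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(
    get_remote_address,
    app=app,
    # e.g. a value entered under "Configure Backend for Limiter"
    storage_uri="redis://localhost:6379",
    # e.g. a value entered under "Options for Limiter"
    storage_options={"socket_connect_timeout": 30},
)
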
@ -396,6 +405,10 @@
<input type="checkbox" id="config_password_upper" name="config_password_upper" {% if config.config_password_upper %}checked{% endif %}>
<label for="config_password_upper">{{_('Enforce uppercase characters')}}</label>
</div>
<div class="form-group" style="margin-left:10px;">
<input type="checkbox" id="config_password_character" name="config_password_character" {% if config.config_password_character %}checked{% endif %}>
<label for="config_password_lower">{{_('Enforce characters (needed For Chinese/Japanese/Korean Characters)')}}</label>
</div>
<div class="form-group" style="margin-left:10px;">
<input type="checkbox" id="config_password_special" name="config_password_special" {% if config.config_password_special %}checked{% endif %}>
<label for="config_password_special">{{_('Enforce special characters')}}</label>

16
cps/templates/detail.html
Executable file → Normal file
@ -205,8 +205,8 @@


{% for c in cc %}
<div class="real_custom_columns">
{% if entry['custom_column_' ~ c.id]|length > 0 %}
{% if entry['custom_column_' ~ c.id]|length > 0 %}
<div class="real_custom_columns">
{{ c.name }}:
{% for column in entry['custom_column_' ~ c.id] %}
{% if c.datatype == 'rating' %}

@ -235,8 +235,9 @@
{% endif %}
{% endif %}
{% endfor %}
{% endif %}
</div>

</div>
{% endif %}
{% endfor %}
{% endif %}
{% if not current_user.is_anonymous %}

@ -332,15 +333,15 @@

{% endif %}
{% if current_user.role_edit() %}
<div class="btn-toolbar" role="toolbar">
<div class="col-sm-12">
<div class="btn-group" role="group" aria-label="Edit/Delete book">
<a href="{{ url_for('edit-book.show_edit_book', book_id=entry.id) }}"
class="btn btn-sm btn-primary" id="edit_book" role="button"><span
class="glyphicon glyphicon-edit"></span> {{ _('Edit Metadata') }}</a>
</div>
</div>
<div class="btn btn-default" data-back="{{ url_for('web.index') }}" id="back">{{_('Cancel')}}</div>
</div>
{% endif %}
</div>
</div>
</div>
</div>

@ -366,4 +367,3 @@
</script>

{% endblock %}

@ -6,7 +6,7 @@
<h2 class="random-books">{{_('Discover (Random Books)')}}</h2>
<div class="row display-flex">
{% for entry in random %}
<div class="col-sm-3 col-lg-2 col-xs-6 book" id="books_rand">
<div class="col-sm-3 col-lg-2 col-xs-6 book session" id="books_rand">
<div class="cover">
<a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>
<span class="img" title="{{ entry.Books.title }}">

@ -89,7 +89,7 @@
<div class="row display-flex">
{% if entries[0] %}
{% for entry in entries %}
<div class="col-sm-3 col-lg-2 col-xs-6 book" id="books">
<div class="col-sm-3 col-lg-2 col-xs-6 book session" id="books">
<div class="cover">
<a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>
<span class="img" title="{{ entry.Books.title }}">

@ -22,6 +22,7 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books sorted alphabetically')}}</content>
</entry>
{% if current_user.check_visibility(g.constants.SIDEBAR_HOT) %}
<entry>
<title>{{_('Hot Books')}}</title>
<link href="{{url_for('opds.feed_hot')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -29,6 +30,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Popular publications from this catalog based on Downloads.')}}</content>
</entry>
{%endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_BEST_RATED) %}
<entry>
<title>{{_('Top Rated Books')}}</title>
<link href="{{url_for('opds.feed_best_rated')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -36,6 +39,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Popular publications from this catalog based on Rating.')}}</content>
</entry>
{%endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_RECENT) %}
<entry>
<title>{{_('Recently added Books')}}</title>
<link href="{{url_for('opds.feed_new')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -43,6 +48,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('The latest Books')}}</content>
</entry>
{%endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_RANDOM) %}
<entry>
<title>{{_('Random Books')}}</title>
<link href="{{url_for('opds.feed_discover')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -50,7 +57,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Show Random Books')}}</content>
</entry>
{% if not current_user.is_anonymous %}
{%endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous %}
<entry>
<title>{{_('Read Books')}}</title>
<link href="{{url_for('opds.feed_read_books')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -66,6 +74,7 @@
<content type="text">{{_('Unread Books')}}</content>
</entry>
{% endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_AUTHOR) %}
<entry>
<title>{{_('Authors')}}</title>
<link href="{{url_for('opds.feed_authorindex')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -73,13 +82,17 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books ordered by Author')}}</content>
</entry>
<entry>
{% endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_PUBLISHER) %}
<entry>
<title>{{_('Publishers')}}</title>
<link href="{{url_for('opds.feed_publisherindex')}}" type="application/atom+xml;profile=opds-catalog"/>
<id>{{url_for('opds.feed_publisherindex')}}</id>
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books ordered by publisher')}}</content>
</entry>
{% endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_CATEGORY) %}
<entry>
<title>{{_('Categories')}}</title>
<link href="{{url_for('opds.feed_categoryindex')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -87,6 +100,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books ordered by category')}}</content>
</entry>
{% endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_SERIES) %}
<entry>
<title>{{_('Series')}}</title>
<link href="{{url_for('opds.feed_seriesindex')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -94,6 +109,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books ordered by series')}}</content>
</entry>
{% endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_LANGUAGE) %}
<entry>
<title>{{_('Languages')}}</title>
<link href="{{url_for('opds.feed_languagesindex')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -101,6 +118,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books ordered by Languages')}}</content>
</entry>
{% endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_RATING) %}
<entry>
<title>{{_('Ratings')}}</title>
<link href="{{url_for('opds.feed_ratingindex')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -108,7 +127,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books ordered by Rating')}}</content>
</entry>

{% endif %}
{% if current_user.check_visibility(g.constants.SIDEBAR_FORMAT) %}
<entry>
<title>{{_('File formats')}}</title>
<link href="{{url_for('opds.feed_formatindex')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -116,6 +136,8 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books ordered by file formats')}}</content>
</entry>
{% endif %}
{% if current_user.is_authenticated or g.allow_anonymous %}
<entry>
<title>{{_('Shelves')}}</title>
<link href="{{url_for('opds.feed_shelfindex')}}" type="application/atom+xml;profile=opds-catalog"/>

@ -123,4 +145,5 @@
<updated>{{ current_time }}</updated>
<content type="text">{{_('Books organized in shelves')}}</content>
</entry>
{% endif %}
</feed>

@ -41,7 +41,7 @@

<div class="row display-flex">
{% for entry in entries %}
<div class="col-sm-3 col-lg-2 col-xs-6 book">
<div class="col-sm-3 col-lg-2 col-xs-6 book session">
<div class="cover">
{% if entry.Books.has_cover is defined %}
<a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>

@ -31,7 +31,7 @@
{% endif %}
<div class="row display-flex">
{% for entry in entries %}
<div class="col-sm-3 col-lg-2 col-xs-6 book">
<div class="col-sm-3 col-lg-2 col-xs-6 book session">
<div class="cover">
<a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>
<span class="img" title="{{entry.Books.title}}" >

@ -21,7 +21,7 @@
{% endif %}
<div class="form-group">
<label for="password">{{_('Password')}}</label>
<input type="password" class="form-control" name="password" id="password" data-lang="{{ current_user.locale }}" data-verify="{{ config.config_password_policy }}" {% if config.config_password_policy %} data-min={{ config.config_password_min_length }} data-special={{ config.config_password_special }} data-upper={{ config.config_password_upper }} data-lower={{ config.config_password_lower }} data-number={{ config.config_password_number }}{% endif %} value="" autocomplete="off">
<input type="password" class="form-control" name="password" id="password" data-lang="{{ current_user.locale }}" data-verify="{{ config.config_password_policy }}" {% if config.config_password_policy %} data-min={{ config.config_password_min_length }} data-word={{ config.config_password_character }} data-special={{ config.config_password_special }} data-upper={{ config.config_password_upper }} data-lower={{ config.config_password_lower }} data-number={{ config.config_password_number }}{% endif %} value="" autocomplete="off">
</div>
{% endif %}
<div class="form-group">

@ -67,7 +67,7 @@
<div class="btn btn-danger" id="config_delete_kobo_token" data-value="{{ content.id }}" data-remote="false" {% if not content.remote_auth_token.first() %} style="display: none;" {% endif %}>{{_('Delete')}}</div>
</div>
<div class="form-group col">
<div class="btn btn-default" id="kobo_full_sync" data-value="{{ content.id }}" {% if not content.remote_auth_token.first() %} style="display: none;" {% endif %}>{{_('Force full kobo sync')}}</div>
<div class="btn btn-default" id="kobo_full_sync" data-value="{% if current_user.role_admin() %}{{ content.id }}{% else %}0{% endif %}" {% if not content.remote_auth_token.first() %} style="display: none;" {% endif %}>{{_('Force full kobo sync')}}</div>
</div>
{% endif %}
<div class="col-sm-6">

@ -177,7 +177,7 @@
<script src="{{ url_for('static', filename='js/libs/bootstrap-table/bootstrap-editable.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/libs/pwstrength/i18next.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/libs/pwstrength/i18nextHttpBackend.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/libs/pwstrength/pwstrength-bootstrap.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/libs/pwstrength/pwstrength-bootstrap.js') }}"></script>
<script src="{{ url_for('static', filename='js/password.js') }}"></script>
<script src="{{ url_for('static', filename='js/table.js') }}"></script>
{% endblock %}

Some files were not shown because too many files have changed in this diff