Resolve merge conflicts

Commit 04a5db5c1d

.gitignore (vendored, 3 changes)
@@ -6,6 +6,7 @@ __pycache__/
 # Distribution / packaging
 .Python
 .python-version
 env/
 venv/
 eggs/

@@ -31,4 +32,4 @@ cps/cache
 settings.yaml
 gdrive_credentials
 client_secrets.json
 gmail.json
@@ -41,6 +41,6 @@ Open a new GitHub pull request with the patch. Ensure the PR description clearly
 In case your code enhances features of Calibre-Web: Create your pull request for the development branch if your enhancement consists of more than some lines of code in a local section of Calibre-Web's code. This makes it easier to test it and check all implications before it's made public.

-Please check if your code runs on Python 2.7 (still necessary in 2020) and mainly on python 3. If possible and the feature is related to operating system functions, try to check it on Windows and Linux.
-Calibre-Web is automatically tested on Linux in combination with python 3.7. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on Github. It uses unit tests and performs real system tests with selenium; it would be great if you could consider also writing some tests.
+Please check if your code runs with python 3, python 2 is no longer supported. If possible and the feature is related to operating system functions, try to check it on Windows and Linux.
+Calibre-Web is automatically tested on Linux in combination with python 3.8. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on Github. It uses unit tests and performs real system tests with selenium; it would be great if you could consider also writing some tests.
 A static code analysis is done by Codacy, but it's partly broken and doesn't run automatically. You could check your code with ESLint before contributing; a configuration file can be found in the project's root folder.
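The testing advice above can be illustrated with a minimal `unittest` sketch; the helper below is hypothetical and is not part of the calibre-web-test repository:

```python
import unittest

def title_sort(title):
    # Hypothetical helper: move a leading English article to the end,
    # roughly like Calibre's own title sorting.
    for article in ("The ", "A ", "An "):
        if title.startswith(article):
            return title[len(article):] + ", " + article.strip()
    return title

class TitleSortTest(unittest.TestCase):
    def test_leading_article_is_moved(self):
        self.assertEqual(title_sort("The Hobbit"), "Hobbit, The")

    def test_plain_title_is_unchanged(self):
        self.assertEqual(title_sort("Dune"), "Dune")
```

Run with `python -m unittest <module>`.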
README.md (30 changes)

@@ -2,6 +2,13 @@
 Calibre-Web is a web app providing a clean interface for browsing, reading and downloading eBooks using an existing [Calibre](https://calibre-ebook.com) database.

+[![GitHub License](https://img.shields.io/github/license/janeczku/calibre-web?style=flat-square)](https://github.com/janeczku/calibre-web/blob/master/LICENSE)
+[![GitHub commit activity](https://img.shields.io/github/commit-activity/w/janeczku/calibre-web?logo=github&style=flat-square&label=commits)]()
+[![GitHub all releases](https://img.shields.io/github/downloads/janeczku/calibre-web/total?logo=github&style=flat-square)](https://github.com/janeczku/calibre-web/releases)
+[![PyPI](https://img.shields.io/pypi/v/calibreweb?logo=pypi&logoColor=fff&style=flat-square)](https://pypi.org/project/calibreweb/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/calibreweb?logo=pypi&logoColor=fff&style=flat-square)](https://pypi.org/project/calibreweb/)
+[![Discord](https://img.shields.io/discord/838810113564344381?label=Discord&logo=discord&style=flat-square)](https://discord.gg/h2VsJ2NEfB)
+
 *This software is a fork of [library](https://github.com/mutschler/calibreserver) and licensed under the GPL v3 License.*

 ![Main screen](https://github.com/janeczku/calibre-web/wiki/images/main_screen.png)
@@ -12,7 +19,7 @@ Calibre-Web is a web app providing a clean interface for browsing, reading and d
 - full graphical setup
 - User management with fine-grained per-user permissions
 - Admin interface
-- User Interface in czech, dutch, english, finnish, french, german, greek, hungarian, italian, japanese, khmer, polish, russian, simplified chinese, spanish, swedish, turkish, ukrainian
+- User Interface in brazilian, czech, dutch, english, finnish, french, german, greek, hungarian, italian, japanese, khmer, polish, russian, simplified chinese, spanish, swedish, turkish, ukrainian
 - OPDS feed for eBook reader apps
 - Filter and search by titles, authors, tags, series and language
 - Create a custom book collection (shelves)
@@ -32,12 +39,19 @@ Calibre-Web is a web app providing a clean interface for browsing, reading and d
 ## Quick start

+#### Install via pip
+1. Install calibre web via pip with the command `pip install calibreweb` (Depending on your OS and/or distro the command could also be `pip3`).
+2. Optional features can also be installed via pip, please refer to [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-Windows) for details
+3. Calibre-Web can be started afterwards by typing `cps` or `python3 -m cps`
+
+#### Manual installation
 1. Install dependencies by running `pip3 install --target vendor -r requirements.txt` (python3.x). Alternatively set up a python virtual environment.
 2. Execute the command: `python3 cps.py` (or `nohup python3 cps.py` - recommended if you want to exit the terminal window)
-3. Point your browser to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog
-4. Set `Location of Calibre database` to the path of the folder where your Calibre library (metadata.db) lives, push "submit" button\
-Optionally a Google Drive can be used to host the calibre library [-> Using Google Drive integration](https://github.com/janeczku/calibre-web/wiki/Configuration#using-google-drive-integration)
-5. Go to Login page
+
+Point your browser to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog
+Set `Location of Calibre database` to the path of the folder where your Calibre library (metadata.db) lives, push "submit" button\
+Optionally a Google Drive can be used to host the calibre library [-> Using Google Drive integration](https://github.com/janeczku/calibre-web/wiki/Configuration#using-google-drive-integration)
+Go to Login page

 **Default admin login:**\
 *Username:* admin\
@@ -48,7 +62,7 @@ Please note that running the above install command can fail on some versions of

 ## Requirements

-python 3.x+
+python 3.5+

 Optionally, to enable on-the-fly conversion from one ebook format to another when using the send-to-kindle feature, or during editing of ebooks metadata:
@@ -80,7 +94,9 @@ Pre-built Docker images are available in these Docker Hub repositories:
 + The "path to convertertool" should be set to `/usr/bin/ebook-convert`
 + The "path to unrar" should be set to `/usr/bin/unrar`

-# Wiki
+# Contact

+Just reach us out on [Discord](https://discord.gg/h2VsJ2NEfB)
+
 For further information, How To's and FAQ please check the [Wiki](https://github.com/janeczku/calibre-web/wiki)
SECURITY.md (new file, 5 lines)

@@ -0,0 +1,5 @@
+# Security Policy
+
+## Reporting a Vulnerability
+
+Please report security issues to ozzie.fernandez.isaacs@googlemail.com
cps.py (3 changes)

@@ -42,6 +42,7 @@ from cps.admin import admi
 from cps.gdrive import gdrive
 from cps.editbooks import editbook
 from cps.remotelogin import remotelogin
+from cps.search_metadata import meta
 from cps.error_handler import init_errorhandler
 from cps.schedule import register_jobs

@@ -71,7 +72,7 @@ def main():
     app.register_blueprint(shelf)
     app.register_blueprint(admi)
     app.register_blueprint(remotelogin)
-    # if config.config_use_google_drive:
+    app.register_blueprint(meta)
     app.register_blueprint(gdrive)
     app.register_blueprint(editbook)
     if kobo_available:
@@ -37,6 +37,11 @@ from . import config_sql, logger, cache_buster, cli, ub, db
 from .reverseproxy import ReverseProxied
 from .server import WebServer

+try:
+    import lxml
+    lxml_present = True
+except ImportError:
+    lxml_present = False

 mimetypes.init()
 mimetypes.add_type('application/xhtml+xml', '.xhtml')
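The `lxml_present` guard added above is a standard optional-import probe; the same idea in isolation (module names here are illustrative, not from this commit):

```python
import importlib

def probe(module_name):
    # Try to import a module and report whether it is available,
    # mirroring the lxml_present flag set in the hunk above.
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False

json_present = probe("json")                 # stdlib, always available
missing_present = probe("no_such_module_xyz")  # not installed -> False
```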
@@ -83,11 +88,23 @@ log = logger.create()

 from . import services

-db.CalibreDB.setup_db(config, cli.settingspath)
+db.CalibreDB.update_config(config)
+db.CalibreDB.setup_db(config.config_calibre_dir, cli.settingspath)

 calibre_db = db.CalibreDB()

 def create_app():
+    if sys.version_info < (3, 0):
+        log.info(
+            '*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, please update your installation to Python3 ***')
+        print(
+            '*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, please update your installation to Python3 ***')
+        sys.exit(5)
+    if not lxml_present:
+        log.info('*** "lxml" is needed for calibre-web to run. Please install it using pip: "pip install lxml" ***')
+        print('*** "lxml" is needed for calibre-web to run. Please install it using pip: "pip install lxml" ***')
+        sys.exit(6)
     app.wsgi_app = ReverseProxied(app.wsgi_app)
-    # For python2 convert path to unicode
-    if sys.version_info < (3, 0):
@@ -97,11 +114,9 @@ def create_app():

-    cache_buster.init_cache_busting(app)
+    if os.environ.get('FLASK_DEBUG'):
+        cache_buster.init_cache_busting(app)

     log.info('Starting Calibre Web...')
-    if sys.version_info < (3, 0):
-        log.info('Python2 is EOL since end of 2019, this version of Calibre-Web supporting Python2 please consider upgrading to Python3')
-        print('Python2 is EOL since end of 2019, this version of Calibre-Web supporting Python2 please consider upgrading to Python3')
     Principal(app)
     lm.init_app(app)
     app.secret_key = os.getenv('SECRET_KEY', config_sql.get_flask_session_key(ub.session))
@@ -126,9 +141,8 @@ def create_app():
 def get_locale():
     # if a user is logged in, use the locale from the user settings
     user = getattr(g, 'user', None)
-    # user = None
     if user is not None and hasattr(user, "locale"):
-        if user.nickname != 'Guest':  # if the account is the guest account bypass the config lang settings
+        if user.name != 'Guest':  # if the account is the guest account bypass the config lang settings
             return user.locale

     preferred = list()
@@ -147,6 +161,7 @@ def get_timezone():
     user = getattr(g, 'user', None)
     return user.timezone if user else None

 from .updater import Updater
 updater_thread = Updater()
 updater_thread.start()
@@ -54,6 +54,12 @@ try:
 except ImportError:
     greenlet_Version = None

+try:
+    from scholarly import scholarly
+    scholarly_version = _(u'installed')
+except ImportError:
+    scholarly_version = _(u'not installed')
+
 from . import services

 about = flask.Blueprint('about', __name__)
@@ -79,6 +85,7 @@ _VERSIONS = OrderedDict(
     iso639=isoLanguages.__version__,
     pytz=pytz.__version__,
     Unidecode = unidecode_version,
+    Scholarly = scholarly_version,
     Flask_SimpleLDAP = u'installed' if bool(services.ldap) else None,
     python_LDAP = services.ldapVersion if bool(services.ldapVersion) else None,
     Goodreads = u'installed' if bool(services.goodreads_support) else None,
cps/admin.py (963 changes): diff suppressed because it is too large.
cps/cli.py (28 changes)

@@ -45,7 +45,7 @@ parser.add_argument('-v', '--version', action='version', help='Shows version num
                     version=version_info())
 parser.add_argument('-i', metavar='ip-address', help='Server IP-Address to listen')
 parser.add_argument('-s', metavar='user:pass', help='Sets specific username to new password')
-parser.add_argument('-f', action='store_true', help='Enables filepicker in unconfigured mode')
+parser.add_argument('-f', action='store_true', help='Flag is depreciated and will be removed in next version')
 args = parser.parse_args()

 if sys.version_info < (3, 0):
@@ -71,7 +71,7 @@ if args.c:
     if os.path.isfile(args.c):
         certfilepath = args.c
     else:
-        print("Certfilepath is invalid. Exiting...")
+        print("Certfile path is invalid. Exiting...")
         sys.exit(1)

 if args.c == "":
@@ -81,7 +81,7 @@ if args.k:
     if os.path.isfile(args.k):
         keyfilepath = args.k
     else:
-        print("Keyfilepath is invalid. Exiting...")
+        print("Keyfile path is invalid. Exiting...")
         sys.exit(1)

 if (args.k and not args.c) or (not args.k and args.c):
@@ -91,29 +91,29 @@ if (args.k and not args.c) or (not args.k and args.c):
 if args.k == "":
     keyfilepath = ""

-# handle and check ipadress argument
-ipadress = args.i or None
-if ipadress:
+# handle and check ip address argument
+ip_address = args.i or None
+if ip_address:
     try:
         # try to parse the given ip address with socket
         if hasattr(socket, 'inet_pton'):
-            if ':' in ipadress:
-                socket.inet_pton(socket.AF_INET6, ipadress)
+            if ':' in ip_address:
+                socket.inet_pton(socket.AF_INET6, ip_address)
             else:
-                socket.inet_pton(socket.AF_INET, ipadress)
+                socket.inet_pton(socket.AF_INET, ip_address)
         else:
             # on windows python < 3.4, inet_pton is not available
             # inet_atom only handles IPv4 addresses
-            socket.inet_aton(ipadress)
+            socket.inet_aton(ip_address)
     except socket.error as err:
-        print(ipadress, ':', err)
+        print(ip_address, ':', err)
         sys.exit(1)

 # handle and check user password argument
 user_credentials = args.s or None
 if user_credentials and ":" not in user_credentials:
-    print("No valid username:password format")
+    print("No valid 'username:password' format")
     sys.exit(3)

-# Handles enableing of filepicker
-filepicker = args.f or None
+if args.f:
+    print("Warning: -f flag is depreciated and will be removed in next version")
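The renamed `ip_address` validation above relies on `socket.inet_pton`, choosing the address family by the presence of a colon; the same check as a standalone helper (a sketch, not code from cps/cli.py):

```python
import socket

def is_valid_ip(address):
    # ':' distinguishes IPv6 from IPv4, exactly as in the argument check above;
    # inet_pton raises OSError for anything that does not parse.
    family = socket.AF_INET6 if ':' in address else socket.AF_INET
    try:
        socket.inet_pton(family, address)
        return True
    except (socket.error, OSError):
        return False
```

Note that `socket.inet_pton` may be unavailable on very old Windows Pythons, which is why the original falls back to `inet_aton`.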
@@ -105,8 +105,8 @@ def _extract_Cover_from_archive(original_file_extension, tmp_file_name, rarExecu
             if extension in COVER_EXTENSIONS:
                 cover_data = cf.read(name)
                 break
-    except Exception as e:
-        log.debug('Rarfile failed with error: %s', e)
+    except Exception as ex:
+        log.debug('Rarfile failed with error: %s', ex)
     return cover_data
@@ -20,16 +20,18 @@
 from __future__ import division, print_function, unicode_literals
 import os
 import sys
 import json

-from sqlalchemy import exc, Column, String, Integer, SmallInteger, Boolean, BLOB, JSON
+from sqlalchemy import Column, String, Integer, SmallInteger, Boolean, BLOB, JSON
+from sqlalchemy.exc import OperationalError
+from sqlalchemy.sql.expression import text
 try:
     # Compatibility with sqlalchemy 2.0
     from sqlalchemy.orm import declarative_base
 except ImportError:
     from sqlalchemy.ext.declarative import declarative_base

-from . import constants, cli, logger, ub
+from . import constants, cli, logger

 log = logger.create()
@@ -39,7 +41,7 @@ class _Flask_Settings(_Base):
     __tablename__ = 'flask_settings'

     id = Column(Integer, primary_key=True)
-    flask_session_key = Column(BLOB, default="")
+    flask_session_key = Column(BLOB, default=b"")

     def __init__(self, key):
         self.flask_session_key = key
@@ -58,6 +60,8 @@ class _Settings(_Base):
     mail_password = Column(String, default='mypassword')
     mail_from = Column(String, default='automailer <mail@example.com>')
     mail_size = Column(Integer, default=25*1024*1024)
+    mail_server_type = Column(SmallInteger, default=0)
+    mail_gmail_token = Column(JSON, default={})

     config_calibre_dir = Column(String)
     config_port = Column(Integer, default=constants.DEFAULT_PORT)
@@ -129,6 +133,7 @@ class _Settings(_Base):
     config_calibre = Column(String)
     config_rarfile_location = Column(String, default=None)
     config_upload_formats = Column(String, default=','.join(constants.EXTENSIONS_UPLOAD))
+    config_unicode_filename = Column(Boolean, default=False)

     config_updatechannel = Column(Integer, default=constants.UPDATE_STABLE)
@@ -188,7 +193,7 @@ class _ConfigSQL(object):

     @staticmethod
     def get_config_ipaddress():
-        return cli.ipadress or ""
+        return cli.ip_address or ""

     def _has_role(self, role_flag):
         return constants.has_flag(self.config_default_role, role_flag)
@@ -246,18 +251,18 @@ class _ConfigSQL(object):
         return {k:v for k, v in self.__dict__.items() if k.startswith('mail_')}

     def get_mail_server_configured(self):
-        return not bool(self.mail_server == constants.DEFAULT_MAIL_SERVER)
+        return bool((self.mail_server != constants.DEFAULT_MAIL_SERVER and self.mail_server_type == 0)
+                    or (self.mail_gmail_token != {} and self.mail_server_type == 1))

     def set_from_dictionary(self, dictionary, field, convertor=None, default=None, encode=None):
-        '''Possibly updates a field of this object.
+        """Possibly updates a field of this object.
         The new value, if present, is grabbed from the given dictionary, and optionally passed through a convertor.

         :returns: `True` if the field has changed value
-        '''
+        """
         new_value = dictionary.get(field, default)
         if new_value is None:
             # log.debug("_ConfigSQL set_from_dictionary field '%s' not found", field)
             return False

         if field not in self.__dict__:
@@ -274,7 +279,6 @@ class _ConfigSQL(object):
         if current_value == new_value:
             return False

-        # log.debug("_ConfigSQL set_from_dictionary '%s' = %r (was %r)", field, new_value, current_value)
         setattr(self, field, new_value)
         return True
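The `set_from_dictionary` helper touched by the two hunks above follows an update-if-changed pattern; a simplified standalone sketch (without the `encode` handling, and with an illustrative `Config` class):

```python
class Config:
    def __init__(self):
        self.port = 8083
        self.calibre_dir = None

    def set_from_dictionary(self, dictionary, field, convertor=None, default=None):
        # Update attribute `field` from `dictionary`; return True only if it changed.
        new_value = dictionary.get(field, default)
        if new_value is None or field not in self.__dict__:
            return False
        if convertor is not None:
            new_value = convertor(new_value)
        if getattr(self, field) == new_value:
            return False
        setattr(self, field, new_value)
        return True

cfg = Config()
changed = cfg.set_from_dictionary({"port": "8080"}, "port", convertor=int)
```

Returning a changed-flag lets the caller decide whether a save/commit is needed at all.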
@@ -305,8 +309,11 @@ class _ConfigSQL(object):
         have_metadata_db = os.path.isfile(db_file)
         self.db_configured = have_metadata_db
         constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip().lower() for x in self.config_upload_formats.split(',')]
-        # pylint: disable=access-member-before-definition
-        logfile = logger.setup(self.config_logfile, self.config_log_level)
+        if os.environ.get('FLASK_DEBUG'):
+            logfile = logger.setup(logger.LOG_TO_STDOUT, logger.logging.DEBUG)
+        else:
+            # pylint: disable=access-member-before-definition
+            logfile = logger.setup(self.config_logfile, self.config_log_level)
         if logfile != self.config_logfile:
             log.warning("Log path %s not valid, falling back to default", self.config_logfile)
             self.config_logfile = logfile
@@ -341,7 +348,7 @@ class _ConfigSQL(object):
         log.error(error)
         log.warning("invalidating configuration")
         self.db_configured = False
-        self.config_calibre_dir = None
+        # self.config_calibre_dir = None
         self.save()
@@ -352,7 +359,7 @@ def _migrate_table(session, orm_class):
         if column_name[0] != '_':
             try:
                 session.query(column).first()
-            except exc.OperationalError as err:
+            except OperationalError as err:
                 log.debug("%s: %s", column_name, err.args[0])
                 if column.default is not None:
                     if sys.version_info < (3, 0):
@@ -362,16 +369,23 @@ def _migrate_table(session, orm_class):
                         column_default = ""
                     else:
                         if isinstance(column.default.arg, bool):
-                            column_default = ("DEFAULT %r" % int(column.default.arg))
+                            column_default = "DEFAULT {}".format(int(column.default.arg))
                         else:
-                            column_default = ("DEFAULT %r" % column.default.arg)
-                    alter_table = "ALTER TABLE %s ADD COLUMN `%s` %s %s" % (orm_class.__tablename__,
-                                                                           column_name,
-                                                                           column.type,
-                                                                           column_default)
+                            column_default = "DEFAULT `{}`".format(column.default.arg)
+                    if isinstance(column.type, JSON):
+                        column_type = "JSON"
+                    else:
+                        column_type = column.type
+                    alter_table = text("ALTER TABLE %s ADD COLUMN `%s` %s %s" % (orm_class.__tablename__,
+                                                                                column_name,
+                                                                                column_type,
+                                                                                column_default))
                     log.debug(alter_table)
                     session.execute(alter_table)
                     changed = True
+            except json.decoder.JSONDecodeError as e:
+                log.error("Database corrupt column: {}".format(column_name))
+                log.debug(e)

     if changed:
         try:
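The migration code above issues plain `ALTER TABLE ... ADD COLUMN` statements with a `DEFAULT` clause; the same idea with the stdlib `sqlite3` module (table and column names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (id INTEGER PRIMARY KEY)")

# Add a missing column with a default, as _migrate_table does for new fields.
conn.execute("ALTER TABLE settings ADD COLUMN `mail_server_type` INTEGER DEFAULT 0")
conn.execute("INSERT INTO settings (id) VALUES (1)")

# Rows pick up the declared default for the new column.
row = conn.execute("SELECT mail_server_type FROM settings").fetchone()
```

SQLite only supports `ADD COLUMN` (no drops or type changes in this form), which is why the probe-then-alter loop above works column by column.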
@@ -430,12 +444,12 @@ def load_configuration(session):
     session.commit()
     conf = _ConfigSQL(session)
     # Migrate from global restrictions to user based restrictions
-    if bool(conf.config_default_show & constants.MATURE_CONTENT) and conf.config_denied_tags == "":
-        conf.config_denied_tags = conf.config_mature_content_tags
-        conf.save()
-        session.query(ub.User).filter(ub.User.mature_content != True). \
-            update({"denied_tags": conf.config_mature_content_tags}, synchronize_session=False)
-        session.commit()
+    #if bool(conf.config_default_show & constants.MATURE_CONTENT) and conf.config_denied_tags == "":
+    #    conf.config_denied_tags = conf.config_mature_content_tags
+    #    conf.save()
+    #    session.query(ub.User).filter(ub.User.mature_content != True). \
+    #        update({"denied_tags": conf.config_mature_content_tags}, synchronize_session=False)
+    #    session.commit()
     return conf

 def get_flask_session_key(session):
@@ -20,6 +20,9 @@ from __future__ import division, print_function, unicode_literals
 import sys
 import os
 from collections import namedtuple
+from sqlalchemy import __version__ as sql_version
+
+sqlalchemy_version2 = ([int(x) for x in sql_version.split('.')] >= [2,0,0])

 # if installed via pip this variable is set to true (empty file with name .HOMEDIR present)
 HOME_CONFIG = os.path.isfile(os.path.join(os.path.dirname(os.path.abspath(__file__)), '.HOMEDIR'))
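The new `sqlalchemy_version2` flag compares dotted version strings component-wise as integer lists rather than lexicographically; the technique standalone:

```python
def at_least(version, minimum):
    # Compare dotted version strings numerically, e.g. "1.4.10" vs "2.0.0".
    # Note: int() would fail on suffixes like "2.0.0b1"; the flag above
    # assumes plain numeric components.
    return [int(x) for x in version.split('.')] >= [int(x) for x in minimum.split('.')]
```

Plain string comparison would get `"10.0.0" < "9.9.9"` wrong; the integer-list form does not.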
@@ -155,7 +158,7 @@ def selected_roles(dictionary):
 BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, description, tags, series, '
                                   'series_id, languages, publisher')

-STABLE_VERSION = {'version': '0.6.12 Beta'}
+STABLE_VERSION = {'version': '0.6.13 Beta'}

 NIGHTLY_VERSION = {}
 NIGHTLY_VERSION[0] = '$Format:%H$'
|
@@ -39,9 +39,9 @@ def _get_command_version(path, pattern, argument=None):
     if argument:
         command.append(argument)
     try:
-        for line in process_wait(command):
-            if re.search(pattern, line):
-                return line
+        match = process_wait(command, pattern=pattern)
+        if isinstance(match, re.Match):
+            return match.string
     except Exception as ex:
         log.warning("%s: %s", path, ex)
     return _EXECUTION_ERROR
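The rewritten `_get_command_version` checks the result with `isinstance(match, re.Match)` and returns `match.string`, which is the whole input the search ran over, not just the matched text; a quick sketch of that behaviour (`re.Match` as a public name needs Python 3.8+):

```python
import re

# Illustrative output line, as a converter's --version call might print it.
line = "ebook-convert (calibre 5.22)"
match = re.search(r"calibre", line)

assert isinstance(match, re.Match)
full_line = match.string      # the complete searched string
matched_text = match.group(0)  # only the portion the pattern matched
```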
cps/db.py (284 changes)

@@ -31,6 +31,7 @@ from sqlalchemy import String, Integer, Boolean, TIMESTAMP, Float
 from sqlalchemy.orm import relationship, sessionmaker, scoped_session
 from sqlalchemy.orm.collections import InstrumentedList
 from sqlalchemy.ext.declarative import DeclarativeMeta
+from sqlalchemy.exc import OperationalError
 try:
     # Compatibility with sqlalchemy 2.0
     from sqlalchemy.orm import declarative_base

@@ -43,6 +44,7 @@ from flask_login import current_user
 from babel import Locale as LC
 from babel.core import UnknownLocaleError
 from flask_babel import gettext as _
+from flask import flash

 from . import logger, ub, isoLanguages
 from .pagination import Pagination
@@ -57,7 +59,7 @@ except ImportError:

 log = logger.create()

-cc_exceptions = ['datetime', 'comments', 'composite', 'series']
+cc_exceptions = ['composite', 'series']
 cc_classes = {}

 Base = declarative_base()
@@ -120,6 +122,8 @@ class Identifiers(Base):
             return u"Douban"
         elif format_type == "goodreads":
             return u"Goodreads"
+        elif format_type == "babelio":
+            return u"Babelio"
         elif format_type == "google":
             return u"Google Books"
         elif format_type == "kobo":

@@ -147,6 +151,8 @@ class Identifiers(Base):
             return u"https://dx.doi.org/{0}".format(self.val)
         elif format_type == "goodreads":
             return u"https://www.goodreads.com/book/show/{0}".format(self.val)
+        elif format_type == "babelio":
+            return u"https://www.babelio.com/livres/titre/{0}".format(self.val)
         elif format_type == "douban":
             return u"https://book.douban.com/subject/{0}".format(self.val)
         elif format_type == "google":
@@ -331,7 +337,6 @@ class Books(Base):
     has_cover = Column(Integer, default=0)
     uuid = Column(String)
     isbn = Column(String(collation='NOCASE'), default="")
-    # Iccn = Column(String(collation='NOCASE'), default="")
     flags = Column(Integer, nullable=False, default=1)

     authors = relationship('Authors', secondary=books_authors_link, backref='books')
@@ -393,7 +398,7 @@ class AlchemyEncoder(json.JSONEncoder):
         if isinstance(o.__class__, DeclarativeMeta):
             # an SQLAlchemy class
             fields = {}
-            for field in [x for x in dir(o) if not x.startswith('_') and x != 'metadata']:
+            for field in [x for x in dir(o) if not x.startswith('_') and x != 'metadata' and x != "password"]:
                 if field == 'books':
                     continue
                 data = o.__getattribute__(field)
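The `AlchemyEncoder` change above excludes the `password` attribute from serialized objects; the same filtering idea with a plain `json.JSONEncoder` subclass (the `User` class here is illustrative, not the real ORM model):

```python
import json

class User:
    def __init__(self):
        self.name = "admin"
        self.password = "secret-hash"

class SafeEncoder(json.JSONEncoder):
    def default(self, o):
        # Serialize public attributes, skipping 'password' the way the
        # updated AlchemyEncoder field filter does.
        return {k: v for k, v in vars(o).items()
                if not k.startswith('_') and k != "password"}

dumped = json.dumps(User(), cls=SafeEncoder)
```

Filtering at the encoder level means every code path that serializes a user object gets the redaction for free.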
@@ -442,26 +447,121 @@ class CalibreDB():
         self.instances.add(self)

     def initSession(self, expire_on_commit=True):
         self.session = self.session_factory()
         self.session.expire_on_commit = expire_on_commit
         self.update_title_sort(self.config)

     @classmethod
-    def setup_db(cls, config, app_db_path):
+    def setup_db_cc_classes(self, cc):
+        cc_ids = []
+        books_custom_column_links = {}
+        for row in cc:
+            if row.datatype not in cc_exceptions:
+                if row.datatype == 'series':
+                    dicttable = {'__tablename__': 'books_custom_column_' + str(row.id) + '_link',
+                                 'id': Column(Integer, primary_key=True),
+                                 'book': Column(Integer, ForeignKey('books.id'),
+                                                primary_key=True),
+                                 'map_value': Column('value', Integer,
+                                                     ForeignKey('custom_column_' +
+                                                                str(row.id) + '.id'),
+                                                     primary_key=True),
+                                 'extra': Column(Float),
+                                 'asoc': relationship('custom_column_' + str(row.id), uselist=False),
+                                 'value': association_proxy('asoc', 'value')
+                                 }
+                    books_custom_column_links[row.id] = type(str('books_custom_column_' + str(row.id) + '_link'),
+                                                             (Base,), dicttable)
+                if row.datatype in ['rating', 'text', 'enumeration']:
+                    books_custom_column_links[row.id] = Table('books_custom_column_' + str(row.id) + '_link',
+                                                              Base.metadata,
+                                                              Column('book', Integer, ForeignKey('books.id'),
+                                                                     primary_key=True),
+                                                              Column('value', Integer,
+                                                                     ForeignKey('custom_column_' +
+                                                                                str(row.id) + '.id'),
+                                                                     primary_key=True)
+                                                              )
+                cc_ids.append([row.id, row.datatype])
+
+                ccdict = {'__tablename__': 'custom_column_' + str(row.id),
+                          'id': Column(Integer, primary_key=True)}
+                if row.datatype == 'float':
+                    ccdict['value'] = Column(Float)
+                elif row.datatype == 'int':
+                    ccdict['value'] = Column(Integer)
+                elif row.datatype == 'datetime':
+                    ccdict['value'] = Column(TIMESTAMP)
+                elif row.datatype == 'bool':
+                    ccdict['value'] = Column(Boolean)
+                else:
+                    ccdict['value'] = Column(String)
+                if row.datatype in ['float', 'int', 'bool', 'datetime', 'comments']:
+                    ccdict['book'] = Column(Integer, ForeignKey('books.id'))
+                cc_classes[row.id] = type(str('custom_column_' + str(row.id)), (Base,), ccdict)
+
+        for cc_id in cc_ids:
+            if cc_id[1] in ['bool', 'int', 'float', 'datetime', 'comments']:
+                setattr(Books,
+                        'custom_column_' + str(cc_id[0]),
+                        relationship(cc_classes[cc_id[0]],
+                                     primaryjoin=(
+                                         Books.id == cc_classes[cc_id[0]].book),
+                                     backref='books'))
+            elif cc_id[1] == 'series':
+                setattr(Books,
+                        'custom_column_' + str(cc_id[0]),
+                        relationship(books_custom_column_links[cc_id[0]],
+                                     backref='books'))
+            else:
+                setattr(Books,
+                        'custom_column_' + str(cc_id[0]),
+                        relationship(cc_classes[cc_id[0]],
+                                     secondary=books_custom_column_links[cc_id[0]],
+                                     backref='books'))
+
+        return cc_classes
+    @classmethod
+    def check_valid_db(cls, config_calibre_dir, app_db_path):
+        if not config_calibre_dir:
+            return False
+        dbpath = os.path.join(config_calibre_dir, "metadata.db")
+        if not os.path.exists(dbpath):
+            return False
+        try:
+            check_engine = create_engine('sqlite://',
+                                         echo=False,
+                                         isolation_level="SERIALIZABLE",
+                                         connect_args={'check_same_thread': False},
+                                         poolclass=StaticPool)
+            with check_engine.begin() as connection:
+                connection.execute(text("attach database '{}' as calibre;".format(dbpath)))
+                connection.execute(text("attach database '{}' as app_settings;".format(app_db_path)))
+            check_engine.connect()
+        except Exception:
+            return False
+        return True
+
+    @classmethod
+    def update_config(cls, config):
+        cls.config = config
+
     @classmethod
+    def setup_db(cls, config_calibre_dir, app_db_path):
+        # cls.config = config
         cls.dispose()

         # toDo: if db changed -> delete shelfs, delete download books, delete read boks, kobo sync??

-        if not config.config_calibre_dir:
-            config.invalidate()
+        if not config_calibre_dir:
+            cls.config.invalidate()
             return False

-        dbpath = os.path.join(config.config_calibre_dir, "metadata.db")
+        dbpath = os.path.join(config_calibre_dir, "metadata.db")
         if not os.path.exists(dbpath):
-            config.invalidate()
+            cls.config.invalidate()
             return False

         try:
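The new `check_valid_db` probes the library by attaching `metadata.db` to a scratch connection; the core of that check with the stdlib `sqlite3` module instead of SQLAlchemy (paths are illustrative):

```python
import os
import sqlite3

def is_valid_calibre_dir(calibre_dir):
    # Mirror check_valid_db: the directory must contain metadata.db and
    # that file must be attachable as a SQLite database.
    dbpath = os.path.join(calibre_dir, "metadata.db")
    if not os.path.exists(dbpath):
        return False
    try:
        conn = sqlite3.connect(":memory:")
        conn.execute("ATTACH DATABASE ? AS calibre", (dbpath,))
        conn.close()
    except sqlite3.Error:
        return False
    return True
```

Attaching instead of opening directly keeps the probe read-only from the application's point of view and leaves no handle on the library database.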
@@ -476,79 +576,18 @@ class CalibreDB():
             conn = cls.engine.connect()
             # conn.text_factory = lambda b: b.decode(errors = 'ignore') possible fix for #1302
-        except Exception as e:
-            config.invalidate(e)
+        except Exception as ex:
+            cls.config.invalidate(ex)
             return False

-        config.db_configured = True
+        cls.config.db_configured = True

-        if not cc_classes:
-            cc = conn.execute(text("SELECT id, datatype FROM custom_columns"))
-
-            cc_ids = []
-            books_custom_column_links = {}
-            for row in cc:
-                if row.datatype not in cc_exceptions:
-                    if row.datatype == 'series':
-                        dicttable = {'__tablename__': 'books_custom_column_' + str(row.id) + '_link',
-                                     'id': Column(Integer, primary_key=True),
-                                     'book': Column(Integer, ForeignKey('books.id'),
-                                                    primary_key=True),
-                                     'map_value': Column('value', Integer,
-                                                         ForeignKey('custom_column_' +
-                                                                    str(row.id) + '.id'),
-                                                         primary_key=True),
-                                     'extra': Column(Float),
-                                     'asoc': relationship('custom_column_' + str(row.id), uselist=False),
-                                     'value': association_proxy('asoc', 'value')
-                                     }
-                        books_custom_column_links[row.id] = type(str('books_custom_column_' + str(row.id) + '_link'),
-                                                                 (Base,), dicttable)
-                    else:
-                        books_custom_column_links[row.id] = Table('books_custom_column_' + str(row.id) + '_link',
-                                                                  Base.metadata,
-                                                                  Column('book', Integer, ForeignKey('books.id'),
-                                                                         primary_key=True),
-                                                                  Column('value', Integer,
-                                                                         ForeignKey('custom_column_' +
-                                                                                    str(row.id) + '.id'),
-                                                                         primary_key=True)
-                                                                  )
-                    cc_ids.append([row.id, row.datatype])
-
-                    ccdict = {'__tablename__': 'custom_column_' + str(row.id),
-                              'id': Column(Integer, primary_key=True)}
-                    if row.datatype == 'float':
-                        ccdict['value'] = Column(Float)
-                    elif row.datatype == 'int':
-                        ccdict['value'] = Column(Integer)
-                    elif row.datatype == 'bool':
-                        ccdict['value'] = Column(Boolean)
-                    else:
-                        ccdict['value'] = Column(String)
-                    if row.datatype in ['float', 'int', 'bool']:
-                        ccdict['book'] = Column(Integer, ForeignKey('books.id'))
-                    cc_classes[row.id] = type(str('custom_column_' + str(row.id)), (Base,), ccdict)
-
-            for cc_id in cc_ids:
-                if (cc_id[1] == 'bool') or (cc_id[1] == 'int') or (cc_id[1] == 'float'):
-                    setattr(Books,
-                            'custom_column_' + str(cc_id[0]),
-                            relationship(cc_classes[cc_id[0]],
-                                         primaryjoin=(
-                                             Books.id == cc_classes[cc_id[0]].book),
-                                         backref='books'))
-                elif (cc_id[1] == 'series'):
-                    setattr(Books,
-                            'custom_column_' + str(cc_id[0]),
-                            relationship(books_custom_column_links[cc_id[0]],
-                                         backref='books'))
-                else:
-                    setattr(Books,
-                            'custom_column_' + str(cc_id[0]),
-                            relationship(cc_classes[cc_id[0]],
-                                         secondary=books_custom_column_links[cc_id[0]],
-                                         backref='books'))
+        try:
+            cc = conn.execute(text("SELECT id, datatype FROM custom_columns"))
+            cls.setup_db_cc_classes(cc)
||||
except OperationalError as e:
|
||||
log.debug_or_exception(e)
|
||||
|
||||
cls.session_factory = scoped_session(sessionmaker(autocommit=False,
|
||||
autoflush=True,
|
||||
|
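The block removed above (moved into `cls.setup_db_cc_classes` by this commit) builds one mapped class per Calibre custom column at runtime with `type()`. Stripped of the SQLAlchemy specifics, the pattern can be sketched as follows; `ColumnValue` and the column ids here are illustrative stand-ins, not part of Calibre-Web:

```python
# Minimal sketch of building classes at runtime with type(), as the
# custom-column code does for each row of the custom_columns table.
class ColumnValue:
    def __init__(self, value):
        self.value = value

def build_column_classes(column_ids):
    classes = {}
    for col_id in column_ids:
        # type(name, bases, namespace) creates a new class, exactly as a
        # `class custom_column_<id>(ColumnValue): ...` statement would.
        name = 'custom_column_' + str(col_id)
        classes[col_id] = type(name, (ColumnValue,), {'__tablename__': name})
    return classes

classes = build_column_classes([1, 5])
```

In the real code the namespace dict additionally carries `Column` objects and `relationship()` attributes, so SQLAlchemy's declarative machinery maps each generated class to its `custom_column_<id>` table.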
@@ -595,20 +634,46 @@ class CalibreDB():
        neg_content_tags_filter = false() if negtags_list == [''] else Books.tags.any(Tags.name.in_(negtags_list))
        pos_content_tags_filter = true() if postags_list == [''] else Books.tags.any(Tags.name.in_(postags_list))
        if self.config.config_restricted_column:
-            pos_cc_list = current_user.allowed_column_value.split(',')
-            pos_content_cc_filter = true() if pos_cc_list == [''] else \
-                getattr(Books, 'custom_column_' + str(self.config.config_restricted_column)). \
-                    any(cc_classes[self.config.config_restricted_column].value.in_(pos_cc_list))
-            neg_cc_list = current_user.denied_column_value.split(',')
-            neg_content_cc_filter = false() if neg_cc_list == [''] else \
-                getattr(Books, 'custom_column_' + str(self.config.config_restricted_column)). \
-                    any(cc_classes[self.config.config_restricted_column].value.in_(neg_cc_list))
+            try:
+                pos_cc_list = current_user.allowed_column_value.split(',')
+                pos_content_cc_filter = true() if pos_cc_list == [''] else \
+                    getattr(Books, 'custom_column_' + str(self.config.config_restricted_column)). \
+                        any(cc_classes[self.config.config_restricted_column].value.in_(pos_cc_list))
+                neg_cc_list = current_user.denied_column_value.split(',')
+                neg_content_cc_filter = false() if neg_cc_list == [''] else \
+                    getattr(Books, 'custom_column_' + str(self.config.config_restricted_column)). \
+                        any(cc_classes[self.config.config_restricted_column].value.in_(neg_cc_list))
+            except (KeyError, AttributeError):
+                pos_content_cc_filter = false()
+                neg_content_cc_filter = true()
+                log.error(u"Custom Column No.%d is not existing in calibre database",
+                          self.config.config_restricted_column)
+                flash(_("Custom Column No.%(column)d is not existing in calibre database",
+                        column=self.config.config_restricted_column),
+                      category="error")

        else:
            pos_content_cc_filter = true()
            neg_content_cc_filter = false()
        return and_(lang_filter, pos_content_tags_filter, ~neg_content_tags_filter,
                    pos_content_cc_filter, ~neg_content_cc_filter, archived_filter)

+    @staticmethod
+    def get_checkbox_sorted(inputlist, state, offset, limit, order):
+        outcome = list()
+        elementlist = {ele.id: ele for ele in inputlist}
+        for entry in state:
+            try:
+                outcome.append(elementlist[entry])
+            except KeyError:
+                pass
+            del elementlist[entry]
+        for entry in elementlist:
+            outcome.append(elementlist[entry])
+        if order == "asc":
+            outcome.reverse()
+        return outcome[offset:offset + limit]

    # Fill indexpage with all requested data from database
    def fill_indexpage(self, page, pagesize, database, db_filter, order, *join):
        return self.fill_indexpage_with_archived_books(page, pagesize, database, db_filter, order, False, *join)
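The `get_checkbox_sorted` helper added above merges a list of checked ids (`state`) with the remaining rows: checked entries first, then everything else, reversed for ascending order, then paged. The same merge can be sketched standalone on plain objects; this sketch guards the `del` so unknown ids are skipped cleanly, where the committed version relies on every `state` id being present:

```python
from types import SimpleNamespace

def get_checkbox_sorted(inputlist, state, offset, limit, order):
    # Checked ids (state) come first in their given order, then the
    # remaining elements; "asc" reverses before paging.
    outcome = list()
    elementlist = {ele.id: ele for ele in inputlist}
    for entry in state:
        if entry in elementlist:
            outcome.append(elementlist[entry])
            del elementlist[entry]
    for entry in elementlist:
        outcome.append(elementlist[entry])
    if order == "asc":
        outcome.reverse()
    return outcome[offset:offset + limit]

rows = [SimpleNamespace(id=i) for i in (1, 2, 3, 4)]
picked = get_checkbox_sorted(rows, [3, 1], 0, 4, "desc")
```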
@@ -626,10 +691,18 @@ class CalibreDB():
            randm = false()
        off = int(int(pagesize) * (page - 1))
        query = self.session.query(database)
+        if len(join) == 6:
+            query = query.outerjoin(join[0], join[1]).outerjoin(join[2]).outerjoin(join[3], join[4]).outerjoin(join[5])
+        if len(join) == 5:
+            query = query.outerjoin(join[0], join[1]).outerjoin(join[2]).outerjoin(join[3], join[4])
+        if len(join) == 4:
+            query = query.outerjoin(join[0], join[1]).outerjoin(join[2]).outerjoin(join[3])
        if len(join) == 3:
-            query = query.join(join[0], join[1]).join(join[2], isouter=True)
+            query = query.outerjoin(join[0], join[1]).outerjoin(join[2])
        elif len(join) == 2:
-            query = query.join(join[0], join[1], isouter=True)
+            query = query.outerjoin(join[0], join[1])
        elif len(join) == 1:
            query = query.outerjoin(join[0])
        query = query.filter(db_filter)\
            .filter(self.common_filters(allow_show_archived))
        entries = list()
@@ -638,8 +711,8 @@ class CalibreDB():
            pagination = Pagination(page, pagesize,
                                    len(query.all()))
            entries = query.order_by(*order).offset(off).limit(pagesize).all()
-        except Exception as e:
-            log.debug_or_exception(e)
+        except Exception as ex:
+            log.debug_or_exception(ex)
        #for book in entries:
        #    book = self.order_authors(book)
        return entries, randm, pagination
@@ -681,23 +754,35 @@ class CalibreDB():
        return self.session.query(Books) \
            .filter(and_(Books.authors.any(and_(*q)), func.lower(Books.title).ilike("%" + title + "%"))).first()

-    # read search results from calibre-database and return it (function is used for feed and simple search
-    def get_search_results(self, term, offset=None, order=None, limit=None):
-        order = order or [Books.sort]
-        pagination = None
+    def search_query(self, term, *join):
        term.strip().lower()
        self.session.connection().connection.connection.create_function("lower", 1, lcase)
        q = list()
        authorterms = re.split("[, ]+", term)
        for authorterm in authorterms:
            q.append(Books.authors.any(func.lower(Authors.name).ilike("%" + authorterm + "%")))
-        result = self.session.query(Books).filter(self.common_filters(True)).filter(
+        query = self.session.query(Books)
+        if len(join) == 6:
+            query = query.outerjoin(join[0], join[1]).outerjoin(join[2]).outerjoin(join[3], join[4]).outerjoin(join[5])
+        if len(join) == 3:
+            query = query.outerjoin(join[0], join[1]).outerjoin(join[2])
+        elif len(join) == 2:
+            query = query.outerjoin(join[0], join[1])
+        elif len(join) == 1:
+            query = query.outerjoin(join[0])
+        return query.filter(self.common_filters(True)).filter(
            or_(Books.tags.any(func.lower(Tags.name).ilike("%" + term + "%")),
                Books.series.any(func.lower(Series.name).ilike("%" + term + "%")),
                Books.authors.any(and_(*q)),
                Books.publishers.any(func.lower(Publishers.name).ilike("%" + term + "%")),
                func.lower(Books.title).ilike("%" + term + "%")
-                )).order_by(*order).all()
+                ))

+    # read search results from calibre-database and return it (function is used for feed and simple search
+    def get_search_results(self, term, offset=None, order=None, limit=None, *join):
+        order = order or [Books.sort]
+        pagination = None
+        result = self.search_query(term, *join).order_by(*order).all()
        result_count = len(result)
        if offset != None and limit != None:
            offset = int(offset)

@@ -777,13 +862,14 @@ class CalibreDB():
    def reconnect_db(self, config, app_db_path):
        self.dispose()
+        self.engine.dispose()
-        self.setup_db(config, app_db_path)
+        self.setup_db(config.config_calibre_dir, app_db_path)
+        self.update_config(config)


def lcase(s):
    try:
        return unidecode.unidecode(s.lower())
-    except Exception as e:
+    except Exception as ex:
        log = logger.create()
-        log.debug_or_exception(e)
+        log.debug_or_exception(ex)
        return s.lower()
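`lcase` above is registered as SQLite's `lower()` function so that searches fold accented characters to ASCII via `unidecode` before comparison. A rough stdlib-only approximation (NFKD decomposition instead of unidecode's transliteration tables, so it covers fewer scripts) looks like this:

```python
import unicodedata

def lcase(s):
    # Approximate unidecode with NFKD decomposition plus dropping the
    # combining marks; fall back to plain lower() on any error, as the
    # original helper does.
    try:
        decomposed = unicodedata.normalize('NFKD', s.lower())
        return decomposed.encode('ascii', 'ignore').decode('ascii')
    except Exception:
        return s.lower()
```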
@@ -22,14 +22,10 @@ import glob
import zipfile
import json
+from io import BytesIO
-try:
-    from StringIO import StringIO
-except ImportError:
-    from io import StringIO

import os

-from flask import send_file
+from flask import send_file, __version__

from . import logger, config
from .about import collect_stats

@@ -38,14 +34,20 @@ log = logger.create()
def assemble_logfiles(file_name):
    log_list = sorted(glob.glob(file_name + '*'), reverse=True)
-    wfd = StringIO()
+    wfd = BytesIO()
    for f in log_list:
-        with open(f, 'r') as fd:
+        with open(f, 'rb') as fd:
            shutil.copyfileobj(fd, wfd)
    wfd.seek(0)
-    return send_file(wfd,
-                     as_attachment=True,
-                     attachment_filename=os.path.basename(file_name))
+    if int(__version__.split('.')[0]) < 2:
+        return send_file(wfd,
+                         as_attachment=True,
+                         attachment_filename=os.path.basename(file_name))
+    else:
+        return send_file(wfd,
+                         as_attachment=True,
+                         download_name=os.path.basename(file_name))


def send_debug():
    file_list = glob.glob(logger.get_logfile(config.config_logfile) + '*')

@@ -60,6 +62,11 @@ def send_debug():
    for fp in file_list:
        zf.write(fp, os.path.basename(fp))
    memory_zip.seek(0)
-    return send_file(memory_zip,
-                     as_attachment=True,
-                     attachment_filename="Calibre-Web-debug-pack.zip")
+    if int(__version__.split('.')[0]) < 2:
+        return send_file(memory_zip,
+                         as_attachment=True,
+                         attachment_filename="Calibre-Web-debug-pack.zip")
+    else:
+        return send_file(memory_zip,
+                         as_attachment=True,
+                         download_name="Calibre-Web-debug-pack.zip")
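Flask 2.0 renamed `send_file`'s `attachment_filename` parameter to `download_name`, which is why both hunks above branch on the imported `__version__`. The version gate itself can be isolated into a small helper (a sketch; the actual `send_file` call stays with the caller):

```python
def attachment_kwargs(flask_version, filename):
    # Flask < 2 expects attachment_filename=, Flask >= 2 expects download_name=.
    key = 'attachment_filename' if int(flask_version.split('.')[0]) < 2 else 'download_name'
    return {'as_attachment': True, key: filename}
```

The caller would then write `send_file(wfd, **attachment_kwargs(__version__, name))` instead of duplicating the whole call in both branches.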
cps/editbooks.py (567 changed lines)

@@ -26,7 +26,22 @@ from datetime import datetime
import json
from shutil import copyfile
from uuid import uuid4
+from markupsafe import escape
+try:
+    from lxml.html.clean import clean_html
+except ImportError:
+    pass
+
+
+# Improve this to check if scholarly is available in a global way, like other pythonic libraries
+try:
+    from scholarly import scholarly
+    have_scholar = True
+except ImportError:
+    have_scholar = False
+
from babel import Locale as LC
from babel.core import UnknownLocaleError
from flask import Blueprint, request, flash, redirect, url_for, abort, Markup, Response
from flask_babel import gettext as _
from flask_login import current_user, login_required

@@ -46,6 +61,8 @@ except ImportError:
    pass  # We're not using Python 3


editbook = Blueprint('editbook', __name__)
log = logger.create()

@@ -68,17 +85,7 @@ def edit_required(f):
    return inner


-# Modifies different Database objects, first check if elements have to be added to database, than check
-# if elements have to be deleted, because they are no longer used
-def modify_database_object(input_elements, db_book_object, db_object, db_session, db_type):
-    # passing input_elements not as a list may lead to undesired results
-    if not isinstance(input_elements, list):
-        raise TypeError(str(input_elements) + " should be passed as a list")
-    changed = False
-    input_elements = [x for x in input_elements if x != '']
-    # we have all input element (authors, series, tags) names now
-    # 1. search for elements to remove
+def search_objects_remove(db_book_object, db_type, input_elements):
    del_elements = []
    for c_elements in db_book_object:
        found = False

@@ -96,7 +103,10 @@ def modify_database_object(input_elements, db_book_object, db_object, db_session
        # if the element was not found in the new list, add it to remove list
        if not found:
            del_elements.append(c_elements)
-    # 2. search for elements that need to be added
+    return del_elements
+
+
+def search_objects_add(db_book_object, db_type, input_elements):
    add_elements = []
    for inp_element in input_elements:
        found = False

@@ -112,64 +122,96 @@
                break
        if not found:
            add_elements.append(inp_element)
-    # if there are elements to remove, we remove them now
+    return add_elements
+
+
+def remove_objects(db_book_object, db_session, del_elements):
+    changed = False
+    if len(del_elements) > 0:
+        for del_element in del_elements:
+            db_book_object.remove(del_element)
+            changed = True
+            if len(del_element.books) == 0:
+                db_session.delete(del_element)
+    return changed
+
+
+def add_objects(db_book_object, db_object, db_session, db_type, add_elements):
+    changed = False
+    if db_type == 'languages':
+        db_filter = db_object.lang_code
+    elif db_type == 'custom':
+        db_filter = db_object.value
+    else:
+        db_filter = db_object.name
+    for add_element in add_elements:
+        # check if a element with that name exists
+        db_element = db_session.query(db_object).filter(db_filter == add_element).first()
+        # if no element is found add it
+        # if new_element is None:
+        if db_type == 'author':
+            new_element = db_object(add_element, helper.get_sorted_author(add_element.replace('|', ',')), "")
+        elif db_type == 'series':
+            new_element = db_object(add_element, add_element)
+        elif db_type == 'custom':
+            new_element = db_object(value=add_element)
+        elif db_type == 'publisher':
+            new_element = db_object(add_element, None)
+        else:  # db_type should be tag or language
+            new_element = db_object(add_element)
+        if db_element is None:
+            changed = True
+            db_session.add(new_element)
+            db_book_object.append(new_element)
+        else:
+            db_element = create_objects_for_addition(db_element, add_element, db_type)
+            changed = True
+            # add element to book
+            changed = True
+            db_book_object.append(db_element)
+    return changed
+
+
+def create_objects_for_addition(db_element, add_element, db_type):
+    if db_type == 'custom':
+        if db_element.value != add_element:
+            db_element.value = add_element  # ToDo: Before new_element, but this is not plausible
+    elif db_type == 'languages':
+        if db_element.lang_code != add_element:
+            db_element.lang_code = add_element
+    elif db_type == 'series':
+        if db_element.name != add_element:
+            db_element.name = add_element
+            db_element.sort = add_element
+    elif db_type == 'author':
+        if db_element.name != add_element:
+            db_element.name = add_element
+            db_element.sort = add_element.replace('|', ',')
+    elif db_type == 'publisher':
+        if db_element.name != add_element:
+            db_element.name = add_element
+            db_element.sort = None
+    elif db_element.name != add_element:
+        db_element.name = add_element
+    return db_element
+
+
+# Modifies different Database objects, first check if elements have to be deleted,
+# because they are no longer used, than check if elements have to be added to database
 def modify_database_object(input_elements, db_book_object, db_object, db_session, db_type):
    # passing input_elements not as a list may lead to undesired results
    if not isinstance(input_elements, list):
        raise TypeError(str(input_elements) + " should be passed as a list")
    input_elements = [x for x in input_elements if x != '']
    # we have all input element (authors, series, tags) names now
    # 1. search for elements to remove
+    del_elements = search_objects_remove(db_book_object, db_type, input_elements)
    # 2. search for elements that need to be added
+    add_elements = search_objects_add(db_book_object, db_type, input_elements)
    # if there are elements to remove, we remove them now
+    changed = remove_objects(db_book_object, db_session, del_elements)
    # if there are elements to add, we add them now!
    if len(add_elements) > 0:
-        if db_type == 'languages':
-            db_filter = db_object.lang_code
-        elif db_type == 'custom':
-            db_filter = db_object.value
-        else:
-            db_filter = db_object.name
-        for add_element in add_elements:
-            # check if a element with that name exists
-            db_element = db_session.query(db_object).filter(db_filter == add_element).first()
-            # if no element is found add it
-            # if new_element is None:
-            if db_type == 'author':
-                new_element = db_object(add_element, helper.get_sorted_author(add_element.replace('|', ',')), "")
-            elif db_type == 'series':
-                new_element = db_object(add_element, add_element)
-            elif db_type == 'custom':
-                new_element = db_object(value=add_element)
-            elif db_type == 'publisher':
-                new_element = db_object(add_element, None)
-            else:  # db_type should be tag or language
-                new_element = db_object(add_element)
-            if db_element is None:
-                changed = True
-                db_session.add(new_element)
-                db_book_object.append(new_element)
-            else:
-                if db_type == 'custom':
-                    if db_element.value != add_element:
-                        new_element.value = add_element
-                elif db_type == 'languages':
-                    if db_element.lang_code != add_element:
-                        db_element.lang_code = add_element
-                elif db_type == 'series':
-                    if db_element.name != add_element:
-                        db_element.name = add_element
-                        db_element.sort = add_element
-                elif db_type == 'author':
-                    if db_element.name != add_element:
-                        db_element.name = add_element
-                        db_element.sort = add_element.replace('|', ',')
-                elif db_type == 'publisher':
-                    if db_element.name != add_element:
-                        db_element.name = add_element
-                        db_element.sort = None
-                elif db_element.name != add_element:
-                    db_element.name = add_element
-                # add element to book
-                changed = True
-                db_book_object.append(db_element)
+        changed |= add_objects(db_book_object, db_object, db_session, db_type, add_elements)
    return changed
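The refactor above splits the monolithic `modify_database_object` into `search_objects_remove`, `search_objects_add`, `remove_objects` and `add_objects`. The heart of the two search helpers is a symmetric difference between the names already attached to the book and the names just submitted; on plain strings (case-sensitive, unlike the database variant, which also matches case-insensitively) it reduces to:

```python
def diff_elements(current, submitted):
    # Returns (to_remove, to_add), preserving order and dropping empty
    # entries, mirroring search_objects_remove / search_objects_add.
    submitted = [x for x in submitted if x != '']
    to_remove = [c for c in current if c not in submitted]
    to_add = [s for s in submitted if s not in current]
    return to_remove, to_add
```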
@@ -202,14 +244,14 @@ def modify_identifiers(input_identifiers, db_identifiers, db_session):
@editbook.route("/ajax/delete/<int:book_id>")
@login_required
def delete_book_from_details(book_id):
-    return Response(delete_book(book_id,"", True), mimetype='application/json')
+    return Response(delete_book(book_id, "", True), mimetype='application/json')


@editbook.route("/delete/<int:book_id>", defaults={'book_format': ""})
@editbook.route("/delete/<int:book_id>/<string:book_format>")
@login_required
def delete_book_ajax(book_id, book_format):
-    return delete_book(book_id,book_format, False)
+    return delete_book(book_id, book_format, False)


def delete_whole_book(book_id, book):

@@ -288,19 +330,19 @@ def delete_book(book_id, book_format, jsonResponse):
        result, error = helper.delete_book(book, config.config_calibre_dir, book_format=book_format.upper())
        if not result:
            if jsonResponse:
-                return json.dumps({"location": url_for("editbook.edit_book"),
-                                   "type": "alert",
+                return json.dumps([{"location": url_for("editbook.edit_book", book_id=book_id),
+                                    "type": "danger",
                                    "format": "",
-                                   "error": error}),
+                                    "message": error}])
            else:
                flash(error, category="error")
                return redirect(url_for('editbook.edit_book', book_id=book_id))
        if error:
            if jsonResponse:
-                warning = {"location": url_for("editbook.edit_book"),
+                warning = {"location": url_for("editbook.edit_book", book_id=book_id),
                           "type": "warning",
                           "format": "",
-                           "error": error}
+                           "message": error}
            else:
                flash(error, category="warning")
        if not book_format:

@@ -309,9 +351,18 @@ def delete_book(book_id, book_format, jsonResponse):
            calibre_db.session.query(db.Data).filter(db.Data.book == book.id).\
                filter(db.Data.format == book_format).delete()
            calibre_db.session.commit()
-        except Exception as e:
-            log.debug_or_exception(e)
+        except Exception as ex:
+            log.debug_or_exception(ex)
            calibre_db.session.rollback()
+            if jsonResponse:
+                return json.dumps([{"location": url_for("editbook.edit_book", book_id=book_id),
+                                    "type": "danger",
+                                    "format": "",
+                                    "message": ex}])
+            else:
+                flash(str(ex), category="error")
+                return redirect(url_for('editbook.edit_book', book_id=book_id))

    else:
        # book not found
        log.error('Book with id "%s" could not be deleted: not found', book_id)

@@ -322,7 +373,7 @@ def render_edit_book(book_id):
    cc = calibre_db.session.query(db.Custom_Columns).filter(db.Custom_Columns.datatype.notin_(db.cc_exceptions)).all()
    book = calibre_db.get_filtered_book(book_id, allow_show_archived=True)
    if not book:
-        flash(_(u"Error opening eBook. File does not exist or file is not accessible"), category="error")
+        flash(_(u"Oops! Selected book title is unavailable. File does not exist or is not accessible"), category="error")
        return redirect(url_for("web.index"))

    for lang in book.languages:

@@ -403,6 +454,9 @@ def edit_book_series_index(series_index, book):
    # Add default series_index to book
    modif_date = False
    series_index = series_index or '1'
+    if not series_index.replace('.', '', 1).isdigit():
+        flash(_("%(seriesindex)s is not a valid number, skipping", seriesindex=series_index), category="warning")
+        return False
    if book.series_index != series_index:
        book.series_index = series_index
        modif_date = True

@@ -411,6 +465,8 @@
# Handle book comments/description
def edit_book_comments(comments, book):
    modif_date = False
+    if comments:
+        comments = clean_html(comments)
    if len(book.comments):
        if book.comments[0].text != comments:
            book.comments[0].text = comments

@@ -422,7 +478,7 @@ def edit_book_comments(comments, book):
    return modif_date


-def edit_book_languages(languages, book, upload=False):
+def edit_book_languages(languages, book, upload=False, invalid=None):
    input_languages = languages.split(',')
    unknown_languages = []
    if not upload:

@@ -431,7 +487,10 @@ def edit_book_languages(languages, book, upload=False):
        input_l = isoLanguages.get_valid_language_codes(get_locale(), input_languages, unknown_languages)
    for l in unknown_languages:
        log.error('%s is not a valid language', l)
-        flash(_(u"%(langname)s is not a valid language", langname=l), category="warning")
+        if isinstance(invalid, list):
+            invalid.append(l)
+        else:
+            flash(_(u"%(langname)s is not a valid language", langname=l), category="warning")
    # ToDo: Not working correct
    if upload and len(input_l) == 1:
        # If the language of the file is excluded from the users view, it's not imported, to allow the user to view

@@ -456,12 +515,21 @@ def edit_book_publisher(publishers, book):
    return changed


-def edit_cc_data_number(book_id, book, c, to_save, cc_db_value, cc_string):
+def edit_cc_data_value(book_id, book, c, to_save, cc_db_value, cc_string):
    changed = False
    if to_save[cc_string] == 'None':
        to_save[cc_string] = None
    elif c.datatype == 'bool':
        to_save[cc_string] = 1 if to_save[cc_string] == 'True' else 0
+    elif c.datatype == 'comments':
+        to_save[cc_string] = Markup(to_save[cc_string]).unescape()
+        if to_save[cc_string]:
+            to_save[cc_string] = clean_html(to_save[cc_string])
+    elif c.datatype == 'datetime':
+        try:
+            to_save[cc_string] = datetime.strptime(to_save[cc_string], "%Y-%m-%d")
+        except ValueError:
+            to_save[cc_string] = db.Books.DEFAULT_PUBDATE

    if to_save[cc_string] != cc_db_value:
        if cc_db_value is not None:

@@ -520,8 +588,8 @@ def edit_cc_data(book_id, book, to_save):
        else:
            cc_db_value = None
        if to_save[cc_string].strip():
-            if c.datatype == 'int' or c.datatype == 'bool' or c.datatype == 'float':
-                changed, to_save = edit_cc_data_number(book_id, book, c, to_save, cc_db_value, cc_string)
+            if c.datatype in ['int', 'bool', 'float', "datetime", "comments"]:
+                changed, to_save = edit_cc_data_value(book_id, book, c, to_save, cc_db_value, cc_string)
            else:
                changed, to_save = edit_cc_data_string(book, c, to_save, cc_db_value, cc_string)
        else:

@@ -596,9 +664,9 @@ def upload_single_file(request, book, book_id):
            return redirect(url_for('web.show_book', book_id=book.id))

        # Queue uploader info
-        uploadText=_(u"File format %(ext)s added to %(book)s", ext=file_ext.upper(), book=book.title)
-        WorkerThread.add(current_user.nickname, TaskUpload(
-            "<a href=\"" + url_for('web.show_book', book_id=book.id) + "\">" + uploadText + "</a>"))
+        link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book.id), escape(book.title))
+        uploadText=_(u"File format %(ext)s added to %(book)s", ext=file_ext.upper(), book=link)
+        WorkerThread.add(current_user.name, TaskUpload(uploadText))

        return uploader.process(
            saved_filename, *os.path.splitext(requested_file.filename),

@@ -622,6 +690,46 @@ def upload_cover(request, book):
    return None


+def handle_title_on_edit(book, book_title):
+    # handle book title
+    book_title = book_title.rstrip().strip()
+    if book.title != book_title:
+        if book_title == '':
+            book_title = _(u'Unknown')
+        book.title = book_title
+        return True
+    return False
+
+
+def handle_author_on_edit(book, author_name, update_stored=True):
+    # handle author(s)
+    input_authors = author_name.split('&')
+    input_authors = list(map(lambda it: it.strip().replace(',', '|'), input_authors))
+    # Remove duplicates in authors list
+    input_authors = helper.uniq(input_authors)
+    # we have all author names now
+    if input_authors == ['']:
+        input_authors = [_(u'Unknown')]  # prevent empty Author
+
+    change = modify_database_object(input_authors, book.authors, db.Authors, calibre_db.session, 'author')
+
+    # Search for each author if author is in database, if not, author name and sorted author name is generated new
+    # everything then is assembled for sorted author field in database
+    sort_authors_list = list()
+    for inp in input_authors:
+        stored_author = calibre_db.session.query(db.Authors).filter(db.Authors.name == inp).first()
+        if not stored_author:
+            stored_author = helper.get_sorted_author(inp)
+        else:
+            stored_author = stored_author.sort
+        sort_authors_list.append(helper.get_sorted_author(stored_author))
+    sort_authors = ' & '.join(sort_authors_list)
+    if book.author_sort != sort_authors and update_stored:
+        book.author_sort = sort_authors
+        change = True
+    return input_authors, change
+
+
@editbook.route("/admin/book/<int:book_id>", methods=['GET', 'POST'])
@login_required_if_no_ano
@edit_required
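The newly extracted `handle_author_on_edit` begins by normalizing the raw author string: split on `&`, trim, turn the comma of `Last, First` into calibre's `|` separator, and drop duplicates. That normalization step alone, with an inlined stand-in for `helper.uniq`, can be sketched as:

```python
def normalize_authors(author_name):
    # Split "A & B & A" style input, trim, replace ',' with calibre's '|',
    # and de-duplicate while keeping first-seen order (stand-in for helper.uniq).
    authors = [a.strip().replace(',', '|') for a in author_name.split('&')]
    seen = []
    for a in authors:
        if a not in seen:
            seen.append(a)
    return seen if seen != [''] else ['Unknown']
```

The real function additionally translates the `'Unknown'` fallback and recomputes the sorted-author field via `helper.get_sorted_author`.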
@@ -639,12 +747,11 @@ def edit_book(book_id):
    if request.method != 'POST':
        return render_edit_book(book_id)

    book = calibre_db.get_filtered_book(book_id, allow_show_archived=True)

    # Book not found
    if not book:
-        flash(_(u"Error opening eBook. File does not exist or file is not accessible"), category="error")
+        flash(_(u"Oops! Selected book title is unavailable. File does not exist or is not accessible"), category="error")
        return redirect(url_for("web.index"))

    meta = upload_single_file(request, book, book_id)

@@ -657,41 +764,14 @@ def edit_book(book_id):
        # Update book
        edited_books_id = None

-        #handle book title
-        if book.title != to_save["book_title"].rstrip().strip():
-            if to_save["book_title"] == '':
-                to_save["book_title"] = _(u'Unknown')
-            book.title = to_save["book_title"].rstrip().strip()
+        # handle book title
+        title_change = handle_title_on_edit(book, to_save["book_title"])
+
+        input_authors, authorchange = handle_author_on_edit(book, to_save["author_name"])
+        if authorchange or title_change:
            edited_books_id = book.id
            modif_date = True

-        # handle author(s)
-        input_authors = to_save["author_name"].split('&')
-        input_authors = list(map(lambda it: it.strip().replace(',', '|'), input_authors))
-        # Remove duplicates in authors list
-        input_authors = helper.uniq(input_authors)
-        # we have all author names now
-        if input_authors == ['']:
-            input_authors = [_(u'Unknown')]  # prevent empty Author
-
-        modif_date |= modify_database_object(input_authors, book.authors, db.Authors, calibre_db.session, 'author')
-
-        # Search for each author if author is in database, if not, authorname and sorted authorname is generated new
-        # everything then is assembled for sorted author field in database
-        sort_authors_list = list()
-        for inp in input_authors:
-            stored_author = calibre_db.session.query(db.Authors).filter(db.Authors.name == inp).first()
-            if not stored_author:
-                stored_author = helper.get_sorted_author(inp)
-            else:
-                stored_author = stored_author.sort
-            sort_authors_list.append(helper.get_sorted_author(stored_author))
-        sort_authors = ' & '.join(sort_authors_list)
-        if book.author_sort != sort_authors:
-            edited_books_id = book.id
-            book.author_sort = sort_authors
-            modif_date = True

        if config.config_use_google_drive:
            gdriveutils.updateGdriveCalibreFromLocal()

@@ -717,10 +797,8 @@ def edit_book(book_id):
# Add default series_index to book
|
||||
modif_date |= edit_book_series_index(to_save["series_index"], book)
|
||||
|
||||
# Handle book comments/description
|
||||
modif_date |= edit_book_comments(to_save["description"], book)
|
||||
|
||||
modif_date |= edit_book_comments(Markup(to_save['description']).unescape(), book)
|
||||
# Handle identifiers
|
||||
input_identifiers = identifier_list(to_save, book)
|
||||
modification, warning = modify_identifiers(input_identifiers, book.identifiers, calibre_db.session)
|
||||
|
@@ -729,9 +807,16 @@ def edit_book(book_id):
         modif_date |= modification
         # Handle book tags
         modif_date |= edit_book_tags(to_save['tags'], book)

         # Handle book series
         modif_date |= edit_book_series(to_save["series"], book)
+        # handle book publisher
+        modif_date |= edit_book_publisher(to_save['publisher'], book)
+        # handle book languages
+        modif_date |= edit_book_languages(to_save['languages'], book)
+        # handle book ratings
+        modif_date |= edit_book_ratings(to_save, book)
+        # handle cc data
+        modif_date |= edit_cc_data(book_id, book, to_save)

         if to_save["pubdate"]:
             try:
@@ -741,18 +826,6 @@ def edit_book(book_id):
         else:
             book.pubdate = db.Books.DEFAULT_PUBDATE

-        # handle book publisher
-        modif_date |= edit_book_publisher(to_save['publisher'], book)
-
-        # handle book languages
-        modif_date |= edit_book_languages(to_save['languages'], book)
-
-        # handle book ratings
-        modif_date |= edit_book_ratings(to_save, book)
-
-        # handle cc data
-        modif_date |= edit_cc_data(book_id, book, to_save)
-
         if modif_date:
             book.last_modified = datetime.utcnow()
         calibre_db.session.merge(book)
@@ -768,8 +841,8 @@ def edit_book(book_id):
             calibre_db.session.rollback()
             flash(error, category="error")
             return render_edit_book(book_id)
-    except Exception as e:
-        log.debug_or_exception(e)
+    except Exception as ex:
+        log.debug_or_exception(ex)
         calibre_db.session.rollback()
         flash(_("Error editing book, please check logfile for details"), category="error")
         return redirect(url_for('web.show_book', book_id=book.id))
@@ -885,6 +958,48 @@ def create_book_on_upload(modif_date, meta):
         calibre_db.session.flush()
     return db_book, input_authors, title_dir

+
+def file_handling_on_upload(requested_file):
+    # check if file extension is correct
+    if '.' in requested_file.filename:
+        file_ext = requested_file.filename.rsplit('.', 1)[-1].lower()
+        if file_ext not in constants.EXTENSIONS_UPLOAD and '' not in constants.EXTENSIONS_UPLOAD:
+            flash(
+                _("File extension '%(ext)s' is not allowed to be uploaded to this server",
+                  ext=file_ext), category="error")
+            return None, Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json')
+    else:
+        flash(_('File to be uploaded must have an extension'), category="error")
+        return None, Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json')
+
+    # extract metadata from file
+    try:
+        meta = uploader.upload(requested_file, config.config_rarfile_location)
+    except (IOError, OSError):
+        log.error("File %s could not saved to temp dir", requested_file.filename)
+        flash(_(u"File %(filename)s could not saved to temp dir",
+                filename=requested_file.filename), category="error")
+        return None, Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json')
+    return meta, None
+
+
+def move_coverfile(meta, db_book):
+    # move cover to final directory, including book id
+    if meta.cover:
+        coverfile = meta.cover
+    else:
+        coverfile = os.path.join(constants.STATIC_DIR, 'generic_cover.jpg')
+    new_coverpath = os.path.join(config.config_calibre_dir, db_book.path, "cover.jpg")
+    try:
+        copyfile(coverfile, new_coverpath)
+        if meta.cover:
+            os.unlink(meta.cover)
+    except OSError as e:
+        log.error("Failed to move cover file %s: %s", new_coverpath, e)
+        flash(_(u"Failed to Move Cover File %(file)s: %(error)s", file=new_coverpath,
+                error=e),
+              category="error")
+
+
 @editbook.route("/upload", methods=["GET", "POST"])
 @login_required_if_no_ano
 @upload_required
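The new `file_handling_on_upload` helper folds validation and metadata extraction into a single `(meta, error_response)` return, so callers only have to check the error slot. A minimal standalone sketch of that contract (the helper name and message strings below are illustrative, not from the patch):

```python
def validate_upload(filename, allowed_exts):
    """Return (stem, error): error is a plain message string instead of a Flask Response."""
    if '.' not in filename:
        return None, "File to be uploaded must have an extension"
    stem, ext = filename.rsplit('.', 1)
    if ext.lower() not in allowed_exts:
        return None, "File extension '{}' is not allowed".format(ext.lower())
    return stem, None

# callers bail out early, mirroring the patch's `if error: return error`
meta, error = validate_upload("book.epub", {"epub", "pdf"})
assert error is None and meta == "book"
meta, error = validate_upload("README", {"epub"})
assert meta is None and "extension" in error
```

Returning the error as the second element keeps the happy path flat, which is why the patched `upload()` shrinks from ~30 lines to three.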
@@ -899,30 +1014,13 @@ def upload():
             calibre_db.update_title_sort(config)
             calibre_db.session.connection().connection.connection.create_function('uuid4', 0, lambda: str(uuid4()))

-            # check if file extension is correct
-            if '.' in requested_file.filename:
-                file_ext = requested_file.filename.rsplit('.', 1)[-1].lower()
-                if file_ext not in constants.EXTENSIONS_UPLOAD and '' not in constants.EXTENSIONS_UPLOAD:
-                    flash(
-                        _("File extension '%(ext)s' is not allowed to be uploaded to this server",
-                          ext=file_ext), category="error")
-                    return Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json')
-            else:
-                flash(_('File to be uploaded must have an extension'), category="error")
-                return Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json')
-
-            # extract metadata from file
-            try:
-                meta = uploader.upload(requested_file, config.config_rarfile_location)
-            except (IOError, OSError):
-                log.error("File %s could not saved to temp dir", requested_file.filename)
-                flash(_(u"File %(filename)s could not saved to temp dir",
-                        filename= requested_file.filename), category="error")
-                return Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json')
+            meta, error = file_handling_on_upload(requested_file)
+            if error:
+                return error

             db_book, input_authors, title_dir = create_book_on_upload(modif_date, meta)

-            # Comments needs book id therfore only possible after flush
+            # Comments needs book id therefore only possible after flush
             modif_date |= edit_book_comments(Markup(meta.description).unescape(), db_book)

             book_id = db_book.id
@@ -932,23 +1030,9 @@ def upload():
                                       config.config_calibre_dir,
                                       input_authors[0],
                                       meta.file_path,
-                                      title_dir + meta.extension)
+                                      title_dir + meta.extension.lower())

-            # move cover to final directory, including book id
-            if meta.cover:
-                coverfile = meta.cover
-            else:
-                coverfile = os.path.join(constants.STATIC_DIR, 'generic_cover.jpg')
-            new_coverpath = os.path.join(config.config_calibre_dir, db_book.path, "cover.jpg")
-            try:
-                copyfile(coverfile, new_coverpath)
-                if meta.cover:
-                    os.unlink(meta.cover)
-            except OSError as e:
-                log.error("Failed to move cover file %s: %s", new_coverpath, e)
-                flash(_(u"Failed to Move Cover File %(file)s: %(error)s", file=new_coverpath,
-                        error=e),
-                      category="error")
+            move_coverfile(meta, db_book)

             # save data to database, reread data
             calibre_db.session.commit()
@@ -957,9 +1041,9 @@ def upload():
                 gdriveutils.updateGdriveCalibreFromLocal()
             if error:
                 flash(error, category="error")
-            uploadText=_(u"File %(file)s uploaded", file=title)
-            WorkerThread.add(current_user.nickname, TaskUpload(
-                "<a href=\"" + url_for('web.show_book', book_id=book_id) + "\">" + uploadText + "</a>"))
+            link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book_id), escape(title))
+            uploadText = _(u"File %(file)s uploaded", file=link)
+            WorkerThread.add(current_user.name, TaskUpload(uploadText))

         if len(request.files.getlist("btn-upload")) < 2:
             if current_user.role_edit() or current_user.role_admin():
@@ -988,7 +1072,7 @@ def convert_bookformat(book_id):
     log.info('converting: book id: %s from: %s to: %s', book_id, book_format_from, book_format_to)
     rtn = helper.convert_book_format(book_id, config.config_calibre_dir, book_format_from.upper(),
-                                     book_format_to.upper(), current_user.nickname)
+                                     book_format_to.upper(), current_user.name)

     if rtn is None:
         flash(_(u"Book successfully queued for converting to %(book_format)s",
@@ -998,61 +1082,110 @@ def convert_bookformat(book_id):
         flash(_(u"There was an error converting this book: %(res)s", res=rtn), category="error")
     return redirect(url_for('editbook.edit_book', book_id=book_id))

+
+@editbook.route("/scholarsearch/<query>",methods=['GET'])
+@login_required_if_no_ano
+@edit_required
+def scholar_search(query):
+    if have_scholar:
+        scholar_gen = scholarly.search_pubs(' '.join(query.split('+')))
+        i=0
+        result = []
+        for publication in scholar_gen:
+            del publication['source']
+            result.append(publication)
+            i+=1
+            if(i>=10):
+                break
+        return Response(json.dumps(result),mimetype='application/json')
+    else:
+        return "[]"
+
+
 @editbook.route("/ajax/editbooks/<param>", methods=['POST'])
 @login_required_if_no_ano
 @edit_required
 def edit_list_book(param):
     vals = request.form.to_dict()
     book = calibre_db.get_book(vals['pk'])
+    ret = ""
     if param =='series_index':
         edit_book_series_index(vals['value'], book)
         ret = Response(json.dumps({'success': True, 'newValue': book.series_index}), mimetype='application/json')
     elif param =='tags':
         edit_book_tags(vals['value'], book)
         ret = Response(json.dumps({'success': True, 'newValue': ', '.join([tag.name for tag in book.tags])}),
                        mimetype='application/json')
     elif param =='series':
         edit_book_series(vals['value'], book)
         ret = Response(json.dumps({'success': True, 'newValue': ', '.join([serie.name for serie in book.series])}),
                        mimetype='application/json')
     elif param =='publishers':
-        vals['publisher'] = vals['value']
-        edit_book_publisher(vals, book)
+        edit_book_publisher(vals['value'], book)
         ret = Response(json.dumps({'success': True,
                                    'newValue': ', '.join([publisher.name for publisher in book.publishers])}),
                        mimetype='application/json')
     elif param =='languages':
-        edit_book_languages(vals['value'], book)
+        invalid = list()
+        edit_book_languages(vals['value'], book, invalid=invalid)
+        if invalid:
+            ret = Response(json.dumps({'success': False,
+                                       'msg': 'Invalid languages in request: {}'.format(','.join(invalid))}),
+                           mimetype='application/json')
+        else:
+            lang_names = list()
+            for lang in book.languages:
+                try:
+                    lang_names.append(LC.parse(lang.lang_code).get_language_name(get_locale()))
+                except UnknownLocaleError:
+                    lang_names.append(_(isoLanguages.get(part3=lang.lang_code).name))
+            ret = Response(json.dumps({'success': True, 'newValue': ', '.join(lang_names)}),
+                           mimetype='application/json')
     elif param =='author_sort':
         book.author_sort = vals['value']
-    elif param =='title':
-        book.title = vals['value']
+        ret = Response(json.dumps({'success': True, 'newValue': book.author_sort}),
+                       mimetype='application/json')
+    elif param == 'title':
+        sort = book.sort
+        handle_title_on_edit(book, vals.get('value', ""))
         helper.update_dir_stucture(book.id, config.config_calibre_dir)
+        ret = Response(json.dumps({'success': True, 'newValue': book.title}),
+                       mimetype='application/json')
     elif param =='sort':
         book.sort = vals['value']
-    # ToDo: edit books
+        ret = Response(json.dumps({'success': True, 'newValue': book.sort}),
+                       mimetype='application/json')
     elif param =='authors':
-        input_authors = vals['value'].split('&')
-        input_authors = list(map(lambda it: it.strip().replace(',', '|'), input_authors))
-        modify_database_object(input_authors, book.authors, db.Authors, calibre_db.session, 'author')
-        sort_authors_list = list()
-        for inp in input_authors:
-            stored_author = calibre_db.session.query(db.Authors).filter(db.Authors.name == inp).first()
-            if not stored_author:
-                stored_author = helper.get_sorted_author(inp)
-            else:
-                stored_author = stored_author.sort
-            sort_authors_list.append(helper.get_sorted_author(stored_author))
-        sort_authors = ' & '.join(sort_authors_list)
-        if book.author_sort != sort_authors:
-            book.author_sort = sort_authors
+        input_authors, __ = handle_author_on_edit(book, vals['value'], vals.get('checkA', None) == "true")
         helper.update_dir_stucture(book.id, config.config_calibre_dir, input_authors[0])
+        ret = Response(json.dumps({'success': True,
+                                   'newValue': ' & '.join([author.replace('|',',') for author in input_authors])}),
+                       mimetype='application/json')
     book.last_modified = datetime.utcnow()
-    calibre_db.session.commit()
-    return ""
+    try:
+        calibre_db.session.commit()
+        # revert change for sort if automatic fields link is deactivated
+        if param == 'title' and vals.get('checkT') == "false":
+            book.sort = sort
+            calibre_db.session.commit()
+    except (OperationalError, IntegrityError) as e:
+        calibre_db.session.rollback()
+        log.error("Database error: %s", e)
+    return ret


 @editbook.route("/ajax/sort_value/<field>/<int:bookid>")
 @login_required
 def get_sorted_entry(field, bookid):
-    if field == 'title' or field == 'authors':
+    if field in ['title', 'authors', 'sort', 'author_sort']:
         book = calibre_db.get_filtered_book(bookid)
         if book:
             if field == 'title':
                 return json.dumps({'sort': book.sort})
             elif field == 'authors':
                 return json.dumps({'author_sort': book.author_sort})
+            if field == 'sort':
+                return json.dumps({'sort': book.title})
+            if field == 'author_sort':
+                return json.dumps({'author_sort': book.author})
     return ""
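After this change `edit_list_book` always answers the inline table editor with a JSON payload of the shape `{'success': bool, 'newValue': ...}` on success or `{'success': False, 'msg': ...}` on failure. A small sketch of building that payload (the `edit_response` helper is ours, for illustration; the field names match the patch):

```python
import json

def edit_response(success, new_value=None, msg=None):
    """Build the JSON body the inline editor expects (shape taken from the patch)."""
    payload = {'success': success}
    if success:
        payload['newValue'] = new_value
    else:
        payload['msg'] = msg
    return json.dumps(payload)

assert json.loads(edit_response(True, new_value='1.5')) == {'success': True, 'newValue': '1.5'}
assert json.loads(edit_response(False, msg='Invalid languages in request: xx')) == \
    {'success': False, 'msg': 'Invalid languages in request: xx'}
```

Initializing `ret = ""` before the `if`/`elif` chain and returning it at the end lets every branch share one commit-and-rollback block instead of committing separately.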
@@ -1104,6 +1237,46 @@ def merge_list_book():
                                                element.format,
                                                element.uncompressed_size,
                                                to_name))
-            delete_book(from_book.id,"", True) # json_resp =
+            delete_book(from_book.id,"", True)
             return json.dumps({'success': True})
     return ""

+
+@editbook.route("/ajax/xchange", methods=['POST'])
+@login_required
+@edit_required
+def table_xchange_author_title():
+    vals = request.get_json().get('xchange')
+    if vals:
+        for val in vals:
+            modif_date = False
+            book = calibre_db.get_book(val)
+            authors = book.title
+            entries = calibre_db.order_authors(book)
+            author_names = []
+            for authr in entries.authors:
+                author_names.append(authr.name.replace('|', ','))
+
+            title_change = handle_title_on_edit(book, " ".join(author_names))
+            input_authors, authorchange = handle_author_on_edit(book, authors)
+            if authorchange or title_change:
+                edited_books_id = book.id
+                modif_date = True
+
+            if config.config_use_google_drive:
+                gdriveutils.updateGdriveCalibreFromLocal()
+
+            if edited_books_id:
+                helper.update_dir_stucture(edited_books_id, config.config_calibre_dir, input_authors[0])
+            if modif_date:
+                book.last_modified = datetime.utcnow()
+            try:
+                calibre_db.session.commit()
+            except (OperationalError, IntegrityError) as e:
+                calibre_db.session.rollback()
+                log.error("Database error: %s", e)
+                return json.dumps({'success': False})
+
+        if config.config_use_google_drive:
+            gdriveutils.updateGdriveCalibreFromLocal()
+        return json.dumps({'success': True})
+    return ""

cps/epub.py (58 changes)
@@ -87,18 +87,29 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
         lang = epub_metadata['language'].split('-', 1)[0].lower()
     epub_metadata['language'] = isoLanguages.get_lang3(lang)

-    series = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='calibre:series']/@content", namespaces=ns)
-    if len(series) > 0:
-        epub_metadata['series'] = series[0]
-    else:
-        epub_metadata['series'] = ''
-
-    series_id = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='calibre:series_index']/@content", namespaces=ns)
-    if len(series_id) > 0:
-        epub_metadata['series_id'] = series_id[0]
-    else:
-        epub_metadata['series_id'] = '1'
+    epub_metadata = parse_epbub_series(ns, tree, epub_metadata)
+
+    coverfile = parse_ebpub_cover(ns, tree, epubZip, coverpath, tmp_file_path)
+
+    if not epub_metadata['title']:
+        title = original_file_name
+    else:
+        title = epub_metadata['title']
+
+    return BookMeta(
+        file_path=tmp_file_path,
+        extension=original_file_extension,
+        title=title.encode('utf-8').decode('utf-8'),
+        author=epub_metadata['creator'].encode('utf-8').decode('utf-8'),
+        cover=coverfile,
+        description=epub_metadata['description'],
+        tags=epub_metadata['subject'].encode('utf-8').decode('utf-8'),
+        series=epub_metadata['series'].encode('utf-8').decode('utf-8'),
+        series_id=epub_metadata['series_id'].encode('utf-8').decode('utf-8'),
+        languages=epub_metadata['language'],
+        publisher="")
+
+
+def parse_ebpub_cover(ns, tree, epubZip, coverpath, tmp_file_path):
     coversection = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='cover-image']/@href", namespaces=ns)
     coverfile = None
     if len(coversection) > 0:
@@ -126,21 +137,18 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
                 coverfile = extractCover(epubZip, filename, "", tmp_file_path)
         else:
             coverfile = extractCover(epubZip, coversection[0], coverpath, tmp_file_path)
-
-    if not epub_metadata['title']:
-        title = original_file_name
-    else:
-        title = epub_metadata['title']
-
-    return BookMeta(
-        file_path=tmp_file_path,
-        extension=original_file_extension,
-        title=title.encode('utf-8').decode('utf-8'),
-        author=epub_metadata['creator'].encode('utf-8').decode('utf-8'),
-        cover=coverfile,
-        description=epub_metadata['description'],
-        tags=epub_metadata['subject'].encode('utf-8').decode('utf-8'),
-        series=epub_metadata['series'].encode('utf-8').decode('utf-8'),
-        series_id=epub_metadata['series_id'].encode('utf-8').decode('utf-8'),
-        languages=epub_metadata['language'],
-        publisher="")
+    return coverfile
+
+
+def parse_epbub_series(ns, tree, epub_metadata):
+    series = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='calibre:series']/@content", namespaces=ns)
+    if len(series) > 0:
+        epub_metadata['series'] = series[0]
+    else:
+        epub_metadata['series'] = ''
+
+    series_id = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='calibre:series_index']/@content", namespaces=ns)
+    if len(series_id) > 0:
+        epub_metadata['series_id'] = series_id[0]
+    else:
+        epub_metadata['series_id'] = '1'
+    return epub_metadata
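The extracted `parse_epbub_series` helper takes the metadata dict, fills in `series` and `series_id` from the calibre `<meta>` entries, and falls back to `''` and `'1'` when they are absent. The fallback logic can be sketched without lxml on a pre-extracted list of `(name, content)` pairs (this simplified stand-in is ours, not the patch's xpath code):

```python
def parse_series_meta(meta_entries, epub_metadata):
    """Mirror parse_epbub_series' defaults on (name, content) <meta> pairs."""
    series = [c for n, c in meta_entries if n == 'calibre:series']
    epub_metadata['series'] = series[0] if series else ''
    series_id = [c for n, c in meta_entries if n == 'calibre:series_index']
    epub_metadata['series_id'] = series_id[0] if series_id else '1'
    return epub_metadata

assert parse_series_meta([('calibre:series', 'Discworld'),
                          ('calibre:series_index', '4')], {}) == \
    {'series': 'Discworld', 'series_id': '4'}
assert parse_series_meta([], {}) == {'series': '', 'series_id': '1'}
```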

cps/error_handler.py
@@ -35,6 +35,7 @@ def error_http(error):
                            error_code="Error {0}".format(error.code),
                            error_name=error.name,
                            issue=False,
+                           unconfigured=not config.db_configured,
                            instance=config.config_calibre_web_title
                            ), error.code

@@ -44,6 +45,7 @@ def internal_error(error):
                            error_code="Internal Server Error",
                            error_name=str(error),
                            issue=True,
+                           unconfigured=False,
                            error_stack=traceback.format_exc().split("\n"),
                            instance=config.config_calibre_web_title
                            ), 500

cps/fb2.py
@@ -29,7 +29,7 @@ def get_fb2_info(tmp_file_path, original_file_extension):
         'l': 'http://www.w3.org/1999/xlink',
     }

-    fb2_file = open(tmp_file_path)
+    fb2_file = open(tmp_file_path, encoding="utf-8")
     tree = etree.fromstring(fb2_file.read().encode())

     authors = tree.xpath('/fb:FictionBook/fb:description/fb:title-info/fb:author', namespaces=ns)

cps/gdrive.py
@@ -74,7 +74,7 @@ def google_drive_callback():
         f.write(credentials.to_json())
     except (ValueError, AttributeError) as error:
         log.error(error)
-    return redirect(url_for('admin.configuration'))
+    return redirect(url_for('admin.db_configuration'))


 @gdrive.route("/watch/subscribe")
@@ -99,7 +99,7 @@ def watch_gdrive():
     else:
         flash(reason['message'], category="error")

-    return redirect(url_for('admin.configuration'))
+    return redirect(url_for('admin.db_configuration'))


 @gdrive.route("/watch/revoke")
@@ -115,7 +115,7 @@ def revoke_watch_gdrive():
         pass
     config.config_google_drive_watch_changes_response = {}
     config.save()
-    return redirect(url_for('admin.configuration'))
+    return redirect(url_for('admin.db_configuration'))


 @gdrive.route("/watch/callback", methods=['GET', 'POST'])
@@ -155,6 +155,6 @@ def on_received_watch_confirmation():
             # prevent error on windows, as os.rename does on existing files, also allow cross hdd move
             move(os.path.join(tmp_dir, "tmp_metadata.db"), dbpath)
             calibre_db.reconnect_db(config, ub.app_DB_path)
-    except Exception as e:
-        log.debug_or_exception(e)
+    except Exception as ex:
+        log.debug_or_exception(ex)
     return ''

cps/gdriveutils.py
@@ -34,6 +34,7 @@ try:
 except ImportError:
     from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.exc import OperationalError, InvalidRequestError
+from sqlalchemy.sql.expression import text

 try:
     from apiclient import errors
@@ -168,7 +169,7 @@ class PermissionAdded(Base):
 def migrate():
     if not engine.dialect.has_table(engine.connect(), "permissions_added"):
         PermissionAdded.__table__.create(bind = engine)
-    for sql in session.execute("select sql from sqlite_master where type='table'"):
+    for sql in session.execute(text("select sql from sqlite_master where type='table'")):
         if 'CREATE TABLE gdrive_ids' in sql[0]:
             currUniqueConstraint = 'UNIQUE (gdrive_id)'
             if currUniqueConstraint in sql[0]:
@@ -202,8 +203,8 @@ def getDrive(drive=None, gauth=None):
             gauth.Refresh()
         except RefreshError as e:
             log.error("Google Drive error: %s", e)
-        except Exception as e:
-            log.debug_or_exception(e)
+        except Exception as ex:
+            log.debug_or_exception(ex)
     else:
         # Initialize the saved creds
         gauth.Authorize()
@@ -221,7 +222,7 @@ def listRootFolders():
         drive = getDrive(Gdrive.Instance().drive)
         folder = "'root' in parents and mimeType = 'application/vnd.google-apps.folder' and trashed = false"
         fileList = drive.ListFile({'q': folder}).GetList()
-    except (ServerNotFoundError, ssl.SSLError) as e:
+    except (ServerNotFoundError, ssl.SSLError, RefreshError) as e:
         log.info("GDrive Error %s" % e)
         fileList = []
     return fileList
@@ -257,7 +258,12 @@ def getEbooksFolderId(drive=None):
         log.error('Error gDrive, root ID not found')
         gDriveId.path = '/'
     session.merge(gDriveId)
-    session.commit()
+    try:
+        session.commit()
+    except OperationalError as ex:
+        log.error("gdrive.db DB is not Writeable")
+        log.debug('Database error: %s', ex)
+        session.rollback()
     return gDriveId.gdrive_id

@@ -272,37 +278,42 @@ def getFile(pathId, fileName, drive):

 def getFolderId(path, drive):
     # drive = getDrive(drive)
-    currentFolderId = getEbooksFolderId(drive)
-    sqlCheckPath = path if path[-1] == '/' else path + '/'
-    storedPathName = session.query(GdriveId).filter(GdriveId.path == sqlCheckPath).first()
-
-    if not storedPathName:
-        dbChange = False
-        s = path.split('/')
-        for i, x in enumerate(s):
-            if len(x) > 0:
-                currentPath = "/".join(s[:i+1])
-                if currentPath[-1] != '/':
-                    currentPath = currentPath + '/'
-                storedPathName = session.query(GdriveId).filter(GdriveId.path == currentPath).first()
-                if storedPathName:
-                    currentFolderId = storedPathName.gdrive_id
-                else:
-                    currentFolder = getFolderInFolder(currentFolderId, x, drive)
-                    if currentFolder:
-                        gDriveId = GdriveId()
-                        gDriveId.gdrive_id = currentFolder['id']
-                        gDriveId.path = currentPath
-                        session.merge(gDriveId)
-                        dbChange = True
-                        currentFolderId = currentFolder['id']
-                    else:
-                        currentFolderId = None
-                        break
-        if dbChange:
-            session.commit()
-    else:
-        currentFolderId = storedPathName.gdrive_id
+    try:
+        currentFolderId = getEbooksFolderId(drive)
+        sqlCheckPath = path if path[-1] == '/' else path + '/'
+        storedPathName = session.query(GdriveId).filter(GdriveId.path == sqlCheckPath).first()
+
+        if not storedPathName:
+            dbChange = False
+            s = path.split('/')
+            for i, x in enumerate(s):
+                if len(x) > 0:
+                    currentPath = "/".join(s[:i+1])
+                    if currentPath[-1] != '/':
+                        currentPath = currentPath + '/'
+                    storedPathName = session.query(GdriveId).filter(GdriveId.path == currentPath).first()
+                    if storedPathName:
+                        currentFolderId = storedPathName.gdrive_id
+                    else:
+                        currentFolder = getFolderInFolder(currentFolderId, x, drive)
+                        if currentFolder:
+                            gDriveId = GdriveId()
+                            gDriveId.gdrive_id = currentFolder['id']
+                            gDriveId.path = currentPath
+                            session.merge(gDriveId)
+                            dbChange = True
+                            currentFolderId = currentFolder['id']
+                        else:
+                            currentFolderId = None
+                            break
+            if dbChange:
+                session.commit()
+        else:
+            currentFolderId = storedPathName.gdrive_id
+    except OperationalError as ex:
+        log.error("gdrive.db DB is not Writeable")
+        log.debug('Database error: %s', ex)
+        session.rollback()
     return currentFolderId

@@ -346,7 +357,7 @@ def moveGdriveFolderRemote(origin_file, target_folder):
                                            addParents=gFileTargetDir['id'],
                                            removeParents=previous_parents,
                                            fields='id, parents').execute()
-    # if previous_parents has no childs anymore, delete original fileparent
+    # if previous_parents has no children anymore, delete original fileparent
     if len(children['items']) == 1:
         deleteDatabaseEntry(previous_parents)
         drive.auth.service.files().delete(fileId=previous_parents).execute()
@@ -497,8 +508,8 @@ def getChangeById (drive, change_id):
     except (errors.HttpError) as error:
         log.error(error)
         return None
-    except Exception as e:
-        log.error(e)
+    except Exception as ex:
+        log.error(ex)
         return None

@@ -507,9 +518,10 @@ def deleteDatabaseOnChange():
     try:
         session.query(GdriveId).delete()
         session.commit()
-    except (OperationalError, InvalidRequestError):
+    except (OperationalError, InvalidRequestError) as ex:
         session.rollback()
-        log.info(u"GDrive DB is not Writeable")
+        log.debug('Database error: %s', ex)
+        log.error(u"GDrive DB is not Writeable")


 def updateGdriveCalibreFromLocal():
@@ -524,13 +536,23 @@ def updateDatabaseOnEdit(ID,newPath):
     storedPathName = session.query(GdriveId).filter(GdriveId.gdrive_id == ID).first()
     if storedPathName:
         storedPathName.path = sqlCheckPath
-        session.commit()
+        try:
+            session.commit()
+        except OperationalError as ex:
+            log.error("gdrive.db DB is not Writeable")
+            log.debug('Database error: %s', ex)
+            session.rollback()


 # Deletes the hashes in database of deleted book
 def deleteDatabaseEntry(ID):
     session.query(GdriveId).filter(GdriveId.gdrive_id == ID).delete()
-    session.commit()
+    try:
+        session.commit()
+    except OperationalError as ex:
+        log.error("gdrive.db DB is not Writeable")
+        log.debug('Database error: %s', ex)
+        session.rollback()


 # Gets cover file from gdrive
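Across gdriveutils.py the patch applies one recurring pattern: every bare `session.commit()` on gdrive.db is wrapped in `try`/`except OperationalError` with a `rollback()`, so a locked or read-only database degrades to a logged error instead of a crash. A standalone sketch of that guard, using a toy session and a stand-in exception class (both ours, for illustration only):

```python
class OperationalError(Exception):
    """Stand-in for sqlalchemy.exc.OperationalError (illustrative)."""

class StubSession:
    # toy session whose commit() can be forced to fail like a locked gdrive.db
    def __init__(self, fail=False):
        self.fail = fail
        self.rolled_back = False
    def commit(self):
        if self.fail:
            raise OperationalError("database is locked")
    def rollback(self):
        self.rolled_back = True

def safe_commit(session):
    # the guard the patch wraps around every gdrive.db commit
    try:
        session.commit()
        return True
    except OperationalError:
        session.rollback()
        return False

ok, bad = StubSession(), StubSession(fail=True)
assert safe_commit(ok) is True and ok.rolled_back is False
assert safe_commit(bad) is False and bad.rolled_back is True
```

Rolling back after a failed commit matters with SQLAlchemy: the session stays in a failed state and rejects further work until `rollback()` is called.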
@@ -547,7 +569,12 @@ def get_cover_via_gdrive(cover_path):
             permissionAdded = PermissionAdded()
             permissionAdded.gdrive_id = df['id']
             session.add(permissionAdded)
-            session.commit()
+            try:
+                session.commit()
+            except OperationalError as ex:
+                log.error("gdrive.db DB is not Writeable")
+                log.debug('Database error: %s', ex)
+                session.rollback()
         return df.metadata.get('webContentLink')
     else:
         return None

cps/helper.py
@@ -35,9 +35,10 @@ from babel.units import format_unit
 from flask import send_from_directory, make_response, redirect, abort, url_for
 from flask_babel import gettext as _
 from flask_login import current_user
-from sqlalchemy.sql.expression import true, false, and_, text
+from sqlalchemy.sql.expression import true, false, and_, text, func
 from werkzeug.datastructures import Headers
 from werkzeug.security import generate_password_hash
+from markupsafe import escape

 try:
     from urllib.parse import quote
@@ -98,10 +99,11 @@ def convert_book_format(book_id, calibrepath, old_book_format, new_book_format,
         settings['body'] = _(u'This e-mail has been sent via Calibre-Web.')
     else:
         settings = dict()
-    txt = (u"%s -> %s: %s" % (
-        old_book_format,
-        new_book_format,
-        "<a href=\"" + url_for('web.show_book', book_id=book.id) + "\">" + book.title + "</a>"))
+    link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book.id), escape(book.title))  # prevent xss
+    txt = u"{} -> {}: {}".format(
+        old_book_format.upper(),
+        new_book_format.upper(),
+        link)
     settings['old_book_format'] = old_book_format
     settings['new_book_format'] = new_book_format
     WorkerThread.add(user_id, TaskConvert(file_path, book.id, txt, settings, kindle_mail, user_id))
@@ -215,9 +217,11 @@ def send_mail(book_id, book_format, convert, kindle_mail, calibrepath, user_id):
     for entry in iter(book.data):
         if entry.format.upper() == book_format.upper():
             converted_file_name = entry.name + '.' + book_format.lower()
+            link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book_id), escape(book.title))
+            EmailText = _(u"%(book)s send to Kindle", book=link)
             WorkerThread.add(user_id, TaskEmail(_(u"Send to Kindle"), book.path, converted_file_name,
                                                 config.get_mail_settings(), kindle_mail,
-                                                _(u"E-mail: %(book)s", book=book.title), _(u'This e-mail has been sent via Calibre-Web.')))
+                                                EmailText, _(u'This e-mail has been sent via Calibre-Web.')))
             return
     return _(u"The requested file could not be read. Maybe wrong permissions?")
@@ -231,16 +235,14 @@ def get_valid_filename(value, replace_whitespace=True):
         value = value[:-1]+u'_'
     value = value.replace("/", "_").replace(":", "_").strip('\0')
-    if use_unidecode:
-        value = (unidecode.unidecode(value))
+    if not config.config_unicode_filename:
+        value = (unidecode.unidecode(value))
     else:
         value = value.replace(u'§', u'SS')
         value = value.replace(u'ß', u'ss')
         value = unicodedata.normalize('NFKD', value)
         re_slugify = re.compile(r'[\W\s-]', re.UNICODE)
-        if isinstance(value, str):  # Python3 str, Python2 unicode
-            value = re_slugify.sub('', value)
-        else:
-            value = unicode(re_slugify.sub('', value))
+        value = re_slugify.sub('', value)
     if replace_whitespace:
         # *+:\"/<>? are replaced by _
         value = re.sub(r'[*+:\\\"/<>?]+', u'_', value, flags=re.U)
@@ -249,10 +251,7 @@ def get_valid_filename(value, replace_whitespace=True):
     value = value[:128].strip()
     if not value:
         raise ValueError("Filename cannot be empty")
-    if sys.version_info.major == 3:
-        return value
-    else:
-        return value.decode('utf-8')
+    return value


 def split_authors(values):
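The slug step in `get_valid_filename` above hinges on one regex. A minimal standalone sketch (the hypothetical `slugify` helper is illustrative, not part of Calibre-Web):

```python
import re

# The re_slugify pattern from the diff: strip every non-word character,
# whitespace character, and hyphen from the normalized value.
re_slugify = re.compile(r'[\W\s-]', re.UNICODE)

def slugify(value):
    return re_slugify.sub('', value)

print(slugify("Hello, World!"))  # -> HelloWorld
```

Note that underscores survive (they are word characters), which is why the separate `replace_whitespace` pass maps `*+:\"/<>?` to `_` afterwards.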
@@ -330,11 +329,12 @@ def delete_book_file(book, calibrepath, book_format=None):
             except (IOError, OSError) as e:
                 log.error("Deleting authorpath for book %s failed: %s", book.id, e)
         return True, None
     else:
-        log.error("Deleting book %s failed, book path not valid: %s", book.id, book.path)
-        return True, _("Deleting book %(id)s, book path not valid: %(path)s",
-                       id=book.id,
-                       path=book.path)
+        log.error("Deleting book %s from database only, book path in database not valid: %s",
+                  book.id, book.path)
+        return True, _("Deleting book %(id)s from database only, book path in database not valid: %(path)s",
+                       id=book.id,
+                       path=book.path)


 # Moves files in file storage during author/title rename, or from temp dir to file storage
@@ -383,7 +383,7 @@ def update_dir_structure_file(book_id, calibrepath, first_author, orignal_filepa
             # os.unlink(os.path.normcase(os.path.join(dir_name, file)))
         # change location in database to new author/title path
         localbook.path = os.path.join(new_authordir, new_titledir).replace('\\','/')
-    except OSError as ex:
+    except (OSError) as ex:
         log.error("Rename title from: %s to %s: %s", path, new_path, ex)
         log.debug(ex, exc_info=True)
         return _("Rename title from: '%(src)s' to '%(dest)s' failed with error: %(error)s",
@@ -398,7 +398,7 @@ def update_dir_structure_file(book_id, calibrepath, first_author, orignal_filepa
                 file_format.name = new_name
         if not orignal_filepath and len(os.listdir(os.path.dirname(path))) == 0:
             shutil.rmtree(os.path.dirname(path))
-    except OSError as ex:
+    except (OSError) as ex:
         log.error("Rename file in path %s to %s: %s", new_path, new_name, ex)
         log.debug(ex, exc_info=True)
         return _("Rename file in path '%(src)s' to '%(dest)s' failed with error: %(error)s",
@@ -481,8 +481,8 @@ def reset_password(user_id):
         password = generate_random_password()
         existing_user.password = generate_password_hash(password)
         ub.session.commit()
-        send_registration_mail(existing_user.email, existing_user.nickname, password, True)
-        return 1, existing_user.nickname
+        send_registration_mail(existing_user.email, existing_user.name, password, True)
+        return 1, existing_user.name
     except Exception:
         ub.session.rollback()
         return 0, None
@@ -499,11 +499,37 @@ def generate_random_password():

 def uniq(inpt):
     output = []
     inpt = [ " ".join(inp.split()) for inp in inpt]
     for x in inpt:
         if x not in output:
             output.append(x)
     return output

+
+def check_email(email):
+    email = valid_email(email)
+    if ub.session.query(ub.User).filter(func.lower(ub.User.email) == email.lower()).first():
+        log.error(u"Found an existing account for this e-mail address")
+        raise Exception(_(u"Found an existing account for this e-mail address"))
+    return email
+
+
+def check_username(username):
+    username = username.strip()
+    if ub.session.query(ub.User).filter(func.lower(ub.User.name) == username.lower()).scalar():
+        log.error(u"This username is already taken")
+        raise Exception (_(u"This username is already taken"))
+    return username
+
+
+def valid_email(email):
+    email = email.strip()
+    # Regex according to https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/email#validation
+    if not re.search(r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$",
+                     email):
+        log.error(u"Invalid e-mail address format")
+        raise Exception(_(u"Invalid e-mail address format"))
+    return email
+
+
 # ################################# External interface #################################
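The pattern in `valid_email` above can be exercised on its own. A small sketch (the `is_valid` wrapper is hypothetical; the regex is copied verbatim from the diff):

```python
import re

# MDN-derived e-mail pattern as used by valid_email in the diff above.
EMAIL_RE = r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$"

def is_valid(email):
    # Strip surrounding whitespace first, as valid_email does.
    return re.search(EMAIL_RE, email.strip()) is not None

print(is_valid("reader@example.com"))  # -> True
print(is_valid("not-an-email"))        # -> False
```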
@@ -740,6 +766,7 @@ def do_download_file(book, book_format, client, data, headers):
     # ToDo Check headers parameter
     for element in headers:
         response.headers[element[0]] = element[1]
+    log.info('Downloading file: {}'.format(os.path.join(filename, data.name + "." + book_format)))
     return response

 ##################################
@@ -756,12 +783,11 @@ def check_unrar(unrarLocation):
         if sys.version_info < (3, 0):
             unrarLocation = unrarLocation.encode(sys.getfilesystemencoding())
         unrarLocation = [unrarLocation]
-        for lines in process_wait(unrarLocation):
-            value = re.search('UNRAR (.*) freeware', lines, re.IGNORECASE)
-            if value:
-                version = value.group(1)
-                log.debug("unrar version %s", version)
-                break
+        value = process_wait(unrarLocation, pattern='UNRAR (.*) freeware')
+        if value:
+            version = value.group(1)
+            log.debug("unrar version %s", version)

     except (OSError, UnicodeDecodeError) as err:
         log.debug_or_exception(err)
         return _('Error excecuting UnRar')
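The version scrape that moved into `process_wait()` still boils down to a single `re.search`. A sketch against a typical banner line (the sample banner text is illustrative, not taken from the diff):

```python
import re

# Apply the same pattern the diff passes to process_wait() to one banner line.
banner = "UNRAR 5.61 freeware      Copyright (c) 1993-2018 Alexander Roshal"
value = re.search('UNRAR (.*) freeware', banner, re.IGNORECASE)
version = value.group(1) if value else None
print(version)  # -> 5.61
```

The greedy `(.*)` is safe here because " freeware" occurs once, so the group captures just the version token.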
@@ -779,7 +805,6 @@ def json_serial(obj):
             'seconds': obj.seconds,
             'microseconds': obj.microseconds,
         }
-    # return obj.isoformat()
     raise TypeError("Type %s not serializable" % type(obj))
@@ -804,7 +829,7 @@ def format_runtime(runtime):
 def render_task_status(tasklist):
     renderedtasklist = list()
     for __, user, __, task in tasklist:
-        if user == current_user.nickname or current_user.role_admin():
+        if user == current_user.name or current_user.role_admin():
             ret = {}
             if task.start_time:
                 ret['starttime'] = format_datetime(task.start_time, format='short', locale=get_locale())
@@ -825,7 +850,7 @@ def render_task_status(tasklist):

             ret['taskMessage'] = "{}: {}".format(_(task.name), task.message)
             ret['progress'] = "{} %".format(int(task.progress * 100))
-            ret['user'] = user
+            ret['user'] = escape(user)  # prevent xss
             renderedtasklist.append(ret)

     return renderedtasklist
@@ -842,8 +867,8 @@ def tags_filters():
 # checks if domain is in database (including wildcards)
 # example SELECT * FROM @TABLE WHERE 'abcdefg' LIKE Name;
-# from https://code.luasoftware.com/tutorials/flask/execute-raw-sql-in-flask-sqlalchemy/
+# in all calls the email address is checked for validity
 def check_valid_domain(domain_text):
     # domain_text = domain_text.split('@', 1)[-1].lower()
     sql = "SELECT * FROM registration WHERE (:domain LIKE domain and allow = 1);"
     result = ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all()
     if not len(result):
@@ -877,6 +902,7 @@ def get_download_link(book_id, book_format, client):
     if book:
         data1 = calibre_db.get_book_format(book.id, book_format.upper())
     else:
+        log.error("Book id {} not found for downloading".format(book_id))
         abort(404)
     if data1:
         # collect downloaded books only for registered user and not for anonymous user
@@ -884,8 +910,8 @@ def get_download_link(book_id, book_format, client):
             ub.update_download(book_id, int(current_user.id))
         file_name = book.title
         if len(book.authors) > 0:
-            file_name = book.authors[0].name + '_' + file_name
-        file_name = get_valid_filename(file_name)
+            file_name = file_name + ' - ' + book.authors[0].name
+        file_name = get_valid_filename(file_name, replace_whitespace=False)
         headers = Headers()
         headers["Content-Type"] = mimetypes.types_map.get('.' + book_format, "application/octet-stream")
         headers["Content-Disposition"] = "attachment; filename=%s.%s; filename*=UTF-8''%s.%s" % (
@@ -63,7 +63,7 @@ def get_language_codes(locale, language_names, remainder=None):
             if v in language_names:
                 lang.append(k)
                 language_names.remove(v)
-    if remainder is not None:
+    if remainder is not None and language_names:
         remainder.extend(language_names)
     return lang
@@ -5007,6 +5007,379 @@ LANGUAGE_NAMES = {
         "zxx": "brak kontekstu językowego",
         "zza": "zazaki"
     },
+    "pt_BR": {
+        "abk": "Abcázio",
+        "ace": "Achém",
+        "ach": "Acoli",
+        "ada": "Adangme",
+        "ady": "Adyghe",
+        "aar": "Afar",
+        "afh": "Afrihili",
+        "afr": "Africânder",
+        "ain": "Ainu (Japão)",
+        "aka": "Akan",
+        "akk": "Acadiano",
+        "sqi": "Albanês",
+        "ale": "Aleúte",
+        "amh": "Amárico",
+        "anp": "Angika",
+        "ara": "Arabic",
+        "arg": "Aragonese",
+        "arp": "Arapaho",
+        "arw": "Arawak",
+        "hye": "Armênio",
+        "asm": "Assamese",
+        "ast": "Asturian",
+        "ava": "Avaric",
+        "ave": "Avestan",
+        "awa": "Awadhi",
+        "aym": "Aymara",
+        "aze": "Azerbaijano",
+        "ban": "Balinês",
+        "bal": "Balúchi",
+        "bam": "Bambara",
+        "bas": "Basa (Cameroon)",
+        "bak": "Bashkir",
+        "eus": "Basque",
+        "bej": "Beja",
+        "bel": "Belarusian",
+        "bem": "Bemba (Zambia)",
+        "ben": "Bengali",
+        "bho": "Bhojpuri",
+        "bik": "Bikol",
+        "byn": "Bilin",
+        "bin": "Bini",
+        "bis": "Bislama",
+        "zbl": "Blissymbols",
+        "bos": "Bosnian",
+        "bra": "Braj",
+        "bre": "Bretão",
+        "bug": "Buginese",
+        "bul": "Búlgaro",
+        "bua": "Buriat",
+        "mya": "Birmanês",
+        "cad": "Caddo",
+        "cat": "Catalão",
+        "ceb": "Cebuano",
+        "chg": "Chagatai",
+        "cha": "Chamorro",
+        "che": "Chechen",
+        "chr": "Cheroqui",
+        "chy": "Cheyenne",
+        "chb": "Chibcha",
+        "zho": "Chinês",
+        "chn": "Chinook jargon",
+        "chp": "Chipewyan",
+        "cho": "Choctaw",
+        "chk": "Chuukese",
+        "chv": "Chuvash",
+        "cop": "Coptic",
+        "cor": "Cornish",
+        "cos": "Corsican",
+        "cre": "Cree",
+        "mus": "Creek",
+        "hrv": "Croata",
+        "ces": "Czech",
+        "dak": "Dacota",
+        "dan": "Danish",
+        "dar": "Dargwa",
+        "del": "Delaware",
+        "div": "Dhivehi",
+        "din": "Dinka",
+        "doi": "Dogri (macrolanguage)",
+        "dgr": "Dogrib",
+        "dua": "Duala",
+        "nld": "Holandês",
+        "dyu": "Dyula",
+        "dzo": "Dzongkha",
+        "efi": "Efik",
+        "egy": "Egyptian (Ancient)",
+        "eka": "Ekajuk",
+        "elx": "Elamite",
+        "eng": "Inglês",
+        "myv": "Erzya",
+        "epo": "Esperanto",
+        "est": "Estónio",
+        "ewe": "Ewe",
+        "ewo": "Ewondo",
+        "fan": "Fang (Equatorial Guinea)",
+        "fat": "Fanti",
+        "fao": "Faroese",
+        "fij": "Fijian",
+        "fil": "Filipino",
+        "fin": "Finlandês",
+        "fon": "Fon",
+        "fra": "Francês",
+        "fur": "Friuliano",
+        "ful": "Fulah",
+        "gaa": "Ga",
+        "glg": "Galician",
+        "lug": "Ganda",
+        "gay": "Gayo",
+        "gba": "Gbaya (Central African Republic)",
+        "gez": "Geez",
+        "kat": "Georgiano",
+        "deu": "Alemão",
+        "gil": "Gilbertês",
+        "gon": "Gondi",
+        "gor": "Gorontalo",
+        "got": "Gótico",
+        "grb": "Grebo",
+        "grn": "Guarani",
+        "guj": "Guzerate",
+        "gwi": "Gwichʼin",
+        "hai": "Haida",
+        "hau": "Hauçá",
+        "haw": "Havaiano",
+        "heb": "Hebraico",
+        "her": "Herero",
+        "hil": "Hiligaynon",
+        "hin": "Hindi",
+        "hmo": "Hiri Motu",
+        "hit": "Hitita",
+        "hmn": "Hmong",
+        "hun": "Húngaro",
+        "hup": "Hupa",
+        "iba": "Iban",
+        "isl": "Islandês",
+        "ido": "Ido",
+        "ibo": "Igbo",
+        "ilo": "Ilocano",
+        "ind": "Indonésio",
+        "inh": "Ingush",
+        "ina": "Interlingua (International Auxiliary Language Association)",
+        "ile": "Interlingue",
+        "iku": "Inuktitut",
+        "ipk": "Inupiaq",
+        "gle": "Irlandês",
+        "ita": "Italiano",
+        "jpn": "Japanese",
+        "jav": "Javanês",
+        "jrb": "Judeo-Arabic",
+        "jpr": "Judeo-Persian",
+        "kbd": "Kabardian",
+        "kab": "Kabyle",
+        "kac": "Kachin",
+        "kal": "Kalaallisut",
+        "xal": "Kalmyk",
+        "kam": "Kamba (Quênia)",
+        "kan": "Canarês",
+        "kau": "Kanuri",
+        "kaa": "Kara-Kalpak",
+        "krc": "Karachay-Balkar",
+        "krl": "Karelian",
+        "kas": "Kashmiri",
+        "csb": "Kashubian",
+        "kaw": "Kawi",
+        "kaz": "Cazaque",
+        "kha": "Khasi",
+        "kho": "Khotanese",
+        "kik": "Quicuio",
+        "kmb": "Quimbundo",
+        "kin": "Kinyarwanda",
+        "kir": "Quirguiz",
+        "tlh": "Klingon",
+        "kom": "Komi",
+        "kon": "Quicongo",
+        "kok": "Konkani (macrolanguage)",
+        "kor": "Coreano",
+        "kos": "Kosraean",
+        "kpe": "Kpelle",
+        "kua": "Kuanyama",
+        "kum": "Kumyk",
+        "kur": "Kurdish",
+        "kru": "Kurukh",
+        "kut": "Kutenai",
+        "lad": "Ladino",
+        "lah": "Lahnda",
+        "lam": "Lamba",
+        "lao": "Laosiano",
+        "lat": "Latin",
+        "lav": "Letão",
+        "lez": "Lezghian",
+        "lim": "Limburgan",
+        "lin": "Lingala",
+        "lit": "Lituano",
+        "jbo": "Lojban",
+        "loz": "Lozi",
+        "lub": "Luba-Catanga",
+        "lua": "Luba-Lulua",
+        "lui": "Luiseno",
+        "smj": "Lule Sami",
+        "lun": "Lunda",
+        "luo": "Luo (Kenya and Tanzania)",
+        "lus": "Lushai",
+        "ltz": "Luxembourgish",
+        "mkd": "Macedónio",
+        "mad": "Madurese",
+        "mag": "Magahi",
+        "mai": "Maithili",
+        "mak": "Makasar",
+        "mlg": "Malgaxe",
+        "msa": "Malay (macrolanguage)",
+        "mal": "Malayalam",
+        "mlt": "Maltese",
+        "mnc": "Manchu",
+        "mdr": "Mandar",
+        "man": "Mandinga",
+        "mni": "Manipuri",
+        "glv": "Manx",
+        "mri": "Maori",
+        "arn": "Mapudungun",
+        "mar": "Marata",
+        "chm": "Mari (Russia)",
+        "mah": "Marshallese",
+        "mwr": "Marwari",
+        "mas": "Masai",
+        "men": "Mende (Sierra Leone)",
+        "mic": "Mi'kmaq",
+        "min": "Minangkabau",
+        "mwl": "Mirandês",
+        "moh": "Mohawk",
+        "mdf": "Mocsa",
+        "lol": "Mongo",
+        "mon": "Mongolian",
+        "mos": "Mossi",
+        "mul": "Múltiplos idiomas",
+        "nqo": "N'Ko",
+        "nau": "Nauruano",
+        "nav": "Navajo",
+        "ndo": "Ndonga",
+        "nap": "Neapolitan",
+        "nia": "Nias",
+        "niu": "Niueano",
+        "zxx": "Sem conteúdo linguistico",
+        "nog": "Nogai",
+        "nor": "Norueguês",
+        "nob": "Norueguês, Dano",
+        "nno": "Norueguês, Novo",
+        "nym": "Nyamwezi",
+        "nya": "Nyanja",
+        "nyn": "Nyankole",
+        "nyo": "Nyoro",
+        "nzi": "Nzima",
+        "oci": "Occitan (post 1500)",
+        "oji": "Ojibwa",
+        "orm": "Oromo",
+        "osa": "Osage",
+        "oss": "Ossetian",
+        "pal": "Pálavi",
+        "pau": "Palauano",
+        "pli": "Pali",
+        "pam": "Pampanga",
+        "pag": "Pangasinense",
+        "pan": "Panjabi",
+        "pap": "Papiamento",
+        "fas": "Persian",
+        "phn": "Fenício",
+        "pon": "Pohnpeian",
+        "pol": "Polaco",
+        "por": "Português",
+        "pus": "Pushto",
+        "que": "Quíchua",
+        "raj": "Rajastani",
+        "rap": "Rapanui",
+        "ron": "Romeno",
+        "roh": "Romansh",
+        "rom": "Romany",
+        "run": "Rundi",
+        "rus": "Russo",
+        "smo": "Samoan",
+        "sad": "Sandawe",
+        "sag": "Sango",
+        "san": "Sanskrit",
+        "sat": "Santali",
+        "srd": "Sardinian",
+        "sas": "Sasak",
+        "sco": "Scots",
+        "sel": "Selkup",
+        "srp": "Sérvio",
+        "srr": "Serere",
+        "shn": "Shan",
+        "sna": "Shona",
+        "scn": "Sicilian",
+        "sid": "Sidamo",
+        "bla": "Siksika",
+        "snd": "Sindi",
+        "sin": "Cingalês",
+        "den": "Slave (Athapascan)",
+        "slk": "Eslovaco",
+        "slv": "Esloveno",
+        "sog": "Sogdian",
+        "som": "Somali",
+        "snk": "Soninke",
+        "spa": "Espanhol",
+        "srn": "Sranan Tongo",
+        "suk": "Sukuma",
+        "sux": "Sumerian",
+        "sun": "Sudanês",
+        "sus": "Sosso",
+        "swa": "Swahili (macrolanguage)",
+        "ssw": "Swati",
+        "swe": "Sueco",
+        "syr": "Siríaco",
+        "tgl": "Tagaloge",
+        "tah": "Tahitian",
+        "tgk": "Tajik",
+        "tmh": "Tamaxeque",
+        "tam": "Tamil",
+        "tat": "Tatar",
+        "tel": "Telugu",
+        "ter": "Tereno",
+        "tet": "Tétum",
+        "tha": "Tailandês",
+        "bod": "Tibetano",
+        "tig": "Tigre",
+        "tir": "Tigrinya",
+        "tem": "Timne",
+        "tiv": "Tiv",
+        "tli": "Tlingit",
+        "tpi": "Tok Pisin",
+        "tkl": "Toquelauano",
+        "tog": "Toganês (Nyasa)",
+        "ton": "Tonga (ilhas tonga)",
+        "tsi": "Tsimshian",
+        "tso": "Tsonga",
+        "tsn": "Tswana",
+        "tum": "Tumbuka",
+        "tur": "Turco",
+        "tuk": "Turcomano",
+        "tvl": "Tuvaluano",
+        "tyv": "Tuvinian",
+        "twi": "Twi",
+        "udm": "Udmurt",
+        "uga": "Ugarítico",
+        "uig": "Uighur",
+        "ukr": "Ucraniano",
+        "umb": "Umbundu",
+        "mis": "Idiomas sem código",
+        "und": "Não identificável",
+        "urd": "Urdu",
+        "uzb": "Usbeque",
+        "vai": "Vai",
+        "ven": "Venda",
+        "vie": "Vietnamita",
+        "vol": "Volapük",
+        "vot": "Votic",
+        "wln": "Walloon",
+        "war": "Waray (Philippines)",
+        "was": "Washo",
+        "cym": "Galês",
+        "wal": "Wolaytta",
+        "wol": "Uolofe",
+        "xho": "Xosa",
+        "sah": "Iacuto",
+        "yao": "Iao",
+        "yap": "Yapese",
+        "yid": "Ídiche",
+        "yor": "Iorubá",
+        "zap": "Zapoteca",
+        "zza": "Zaza",
+        "zen": "Zenaga",
+        "zha": "Zhuang",
+        "zul": "Zulu",
+        "zun": "Zuni"
+    },
     "ru": {
         "aar": "Афар",
         "abk": "Абхазский",
@@ -31,7 +31,7 @@ from babel.dates import format_date
 from flask import Blueprint, request, url_for
 from flask_babel import get_locale
 from flask_login import current_user

+from markupsafe import escape
 from . import logger
@@ -82,7 +82,7 @@ def formatdate_filter(val):
     except AttributeError as e:
         log.error('Babel error: %s, Current user locale: %s, Current User: %s', e,
                   current_user.locale,
-                  current_user.nickname
+                  current_user.name
                   )
         return val
@@ -113,21 +113,25 @@ def yesno(value, yes, no):

 @jinjia.app_template_filter('formatfloat')
 def formatfloat(value, decimals=1):
-    formatedstring = '%d' % value
-    if (value % 1) != 0:
-        formatedstring = ('%s.%d' % (formatedstring, (value % 1) * 10**decimals)).rstrip('0')
-    return formatedstring
+    value = 0 if not value else value
+    return ('{0:.' + str(decimals) + 'f}').format(value).rstrip('0').rstrip('.')


 @jinjia.app_template_filter('formatseriesindex')
 def formatseriesindex_filter(series_index):
     if series_index:
-        if int(series_index) - series_index == 0:
-            return int(series_index)
-        else:
+        try:
+            if int(series_index) - series_index == 0:
+                return int(series_index)
+            else:
+                return series_index
+        except ValueError:
             return series_index
     return 0

+
+@jinjia.app_template_filter('escapedlink')
+def escapedlink_filter(url, text):
+    return "<a href='{}'>{}</a>".format(url, escape(text))
+
+
 @jinjia.app_template_filter('uuidfilter')
 def uuidfilter(var):
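The new `formatfloat` body is self-contained enough to restate as a standalone function and check its edge cases (restated verbatim from the diff; the sample inputs are illustrative):

```python
# Format a float with up to `decimals` places, trimming trailing zeros and a
# dangling decimal point; falsy values (None, 0) collapse to "0".
def formatfloat(value, decimals=1):
    value = 0 if not value else value
    return ('{0:.' + str(decimals) + 'f}').format(value).rstrip('0').rstrip('.')

print(formatfloat(2.5))   # -> 2.5
print(formatfloat(3.0))   # -> 3
print(formatfloat(None))  # -> 0
```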
205 cps/kobo.py
@@ -42,11 +42,13 @@ from flask import (
 from flask_login import current_user
 from werkzeug.datastructures import Headers
 from sqlalchemy import func
-from sqlalchemy.sql.expression import and_
+from sqlalchemy.sql.expression import and_, or_
 from sqlalchemy.exc import StatementError
+from sqlalchemy.sql import select
 import requests

 from . import config, logger, kobo_auth, db, calibre_db, helper, shelf as shelf_lib, ub
+from .constants import sqlalchemy_version2
 from .helper import get_download_link
 from .services import SyncToken as SyncToken
 from .web import download_required
|
@ -81,6 +83,7 @@ CONNECTION_SPECIFIC_HEADERS = [
|
|||
"transfer-encoding",
|
||||
]
|
||||
|
||||
|
||||
def get_kobo_activated():
|
||||
return config.config_kobo_sync
|
||||
|
||||
|
@@ -135,6 +138,7 @@ def convert_to_kobo_timestamp_string(timestamp):
 def HandleSyncRequest():
     sync_token = SyncToken.SyncToken.from_headers(request.headers)
     log.info("Kobo library sync request received.")
+    log.debug("SyncToken: {}".format(sync_token))
     if not current_app.wsgi_app.is_proxied:
         log.debug('Kobo: Received unproxied request, changed request port to external server port')
@@ -151,33 +155,60 @@ def HandleSyncRequest():
     # in case of external changes (e.g: adding a book through Calibre).
     calibre_db.reconnect_db(config, ub.app_DB_path)

-    if sync_token.books_last_id > -1:
-        changed_entries = (
-            calibre_db.session.query(db.Books, ub.ArchivedBook.last_modified, ub.ArchivedBook.is_archived)
-            .join(db.Data).outerjoin(ub.ArchivedBook, db.Books.id == ub.ArchivedBook.book_id)
-            .filter(db.Books.last_modified >= sync_token.books_last_modified)
-            .filter(db.Books.id>sync_token.books_last_id)
-            .filter(db.Data.format.in_(KOBO_FORMATS))
-            .order_by(db.Books.last_modified)
-            .order_by(db.Books.id)
-            .limit(SYNC_ITEM_LIMIT)
+    only_kobo_shelves = current_user.kobo_only_shelves_sync
+
+    if only_kobo_shelves:
+        if sqlalchemy_version2:
+            changed_entries = select(db.Books,
+                                     ub.ArchivedBook.last_modified,
+                                     ub.BookShelf.date_added,
+                                     ub.ArchivedBook.is_archived)
+        else:
+            changed_entries = calibre_db.session.query(db.Books,
+                                                       ub.ArchivedBook.last_modified,
+                                                       ub.BookShelf.date_added,
+                                                       ub.ArchivedBook.is_archived)
+        changed_entries = (changed_entries
+                           .join(db.Data).outerjoin(ub.ArchivedBook, db.Books.id == ub.ArchivedBook.book_id)
+                           .filter(or_(db.Books.last_modified > sync_token.books_last_modified,
+                                       ub.BookShelf.date_added > sync_token.books_last_modified))
+                           .filter(db.Data.format.in_(KOBO_FORMATS)).filter(calibre_db.common_filters())
+                           .order_by(db.Books.id)
+                           .order_by(ub.ArchivedBook.last_modified)
+                           .join(ub.BookShelf, db.Books.id == ub.BookShelf.book_id)
+                           .join(ub.Shelf)
+                           .filter(ub.Shelf.user_id == current_user.id)
+                           .filter(ub.Shelf.kobo_sync)
+                           .distinct()
+                           )
     else:
-        changed_entries = (
-            calibre_db.session.query(db.Books, ub.ArchivedBook.last_modified, ub.ArchivedBook.is_archived)
-            .join(db.Data).outerjoin(ub.ArchivedBook, db.Books.id == ub.ArchivedBook.book_id)
-            .filter(db.Books.last_modified > sync_token.books_last_modified)
-            .filter(db.Data.format.in_(KOBO_FORMATS))
-            .order_by(db.Books.last_modified)
-            .order_by(db.Books.id)
-            .limit(SYNC_ITEM_LIMIT)
+        if sqlalchemy_version2:
+            changed_entries = select(db.Books, ub.ArchivedBook.last_modified, ub.ArchivedBook.is_archived)
+        else:
+            changed_entries = calibre_db.session.query(db.Books,
+                                                       ub.ArchivedBook.last_modified,
+                                                       ub.ArchivedBook.is_archived)
+        changed_entries = (changed_entries
+                           .join(db.Data).outerjoin(ub.ArchivedBook, db.Books.id == ub.ArchivedBook.book_id)
+                           .filter(db.Books.last_modified > sync_token.books_last_modified)
+                           .filter(calibre_db.common_filters())
+                           .filter(db.Data.format.in_(KOBO_FORMATS))
+                           .order_by(db.Books.last_modified)
+                           .order_by(db.Books.id)
+                           )
+
+    if sync_token.books_last_id > -1:
+        changed_entries = changed_entries.filter(db.Books.id > sync_token.books_last_id)

     reading_states_in_new_entitlements = []
-    for book in changed_entries:
+    if sqlalchemy_version2:
+        books = calibre_db.session.execute(changed_entries.limit(SYNC_ITEM_LIMIT))
+    else:
+        books = changed_entries.limit(SYNC_ITEM_LIMIT)
+    for book in books:
         formats = [data.format for data in book.Books.data]
         if not 'KEPUB' in formats and config.config_kepubifypath and 'EPUB' in formats:
-            helper.convert_book_format(book.Books.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.nickname)
+            helper.convert_book_format(book.Books.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.name)

         kobo_reading_state = get_or_create_reading_state(book.Books.id)
         entitlement = {
@@ -190,7 +221,14 @@ def HandleSyncRequest():
             new_reading_state_last_modified = max(new_reading_state_last_modified, kobo_reading_state.last_modified)
             reading_states_in_new_entitlements.append(book.Books.id)

-        if book.Books.timestamp > sync_token.books_last_created:
+        ts_created = book.Books.timestamp
+
+        try:
+            ts_created = max(ts_created, book.date_added)
+        except AttributeError:
+            pass
+
+        if ts_created > sync_token.books_last_created:
             sync_results.append({"NewEntitlement": entitlement})
         else:
             sync_results.append({"ChangedEntitlement": entitlement})
@@ -198,35 +236,59 @@ def HandleSyncRequest():
         new_books_last_modified = max(
             book.Books.last_modified, new_books_last_modified
         )
-        new_books_last_created = max(book.Books.timestamp, new_books_last_created)
+        try:
+            new_books_last_modified = max(
+                new_books_last_modified, book.date_added
+            )
+        except AttributeError:
+            pass
+
+        new_books_last_created = max(ts_created, new_books_last_created)

-    max_change = (changed_entries
-                  .from_self()
-                  .filter(ub.ArchivedBook.is_archived)
-                  .order_by(func.datetime(ub.ArchivedBook.last_modified).desc())
-                  .first()
-                  )
-    if max_change:
-        max_change = max_change.last_modified
-    else:
-        max_change = new_archived_last_modified
+    if sqlalchemy_version2:
+        max_change = calibre_db.session.execute(changed_entries
+                                                .filter(ub.ArchivedBook.is_archived)
+                                                .order_by(func.datetime(ub.ArchivedBook.last_modified).desc()))\
+            .columns(db.Books).first()
+    else:
+        max_change = changed_entries.from_self().filter(ub.ArchivedBook.is_archived) \
+            .order_by(func.datetime(ub.ArchivedBook.last_modified).desc()).first()
+
+    max_change = max_change.last_modified if max_change else new_archived_last_modified

     new_archived_last_modified = max(new_archived_last_modified, max_change)

-    # no. of books returned
-    book_count = changed_entries.count()
-
-    # last entry:
-    if book_count:
-        books_last_id = changed_entries.all()[-1].Books.id or -1
-    else:
-        books_last_id = -1
+    if sqlalchemy_version2:
+        entries = calibre_db.session.execute(changed_entries).all()
+        book_count = len(entries)
+    else:
+        entries = changed_entries.all()
+        book_count = changed_entries.count()
+    # last entry:
+    books_last_id = entries[-1].Books.id or -1 if book_count else -1

     # generate reading state data
-    changed_reading_states = (
-        ub.session.query(ub.KoboReadingState)
-        .filter(and_(func.datetime(ub.KoboReadingState.last_modified) > sync_token.reading_state_last_modified,
-                     ub.KoboReadingState.user_id == current_user.id,
-                     ub.KoboReadingState.book_id.notin_(reading_states_in_new_entitlements))))
+    changed_reading_states = ub.session.query(ub.KoboReadingState)
+
+    if only_kobo_shelves:
+        changed_reading_states = changed_reading_states.join(ub.BookShelf,
+                                                             ub.KoboReadingState.book_id == ub.BookShelf.book_id)\
+            .join(ub.Shelf)\
+            .filter(current_user.id == ub.Shelf.user_id)\
+            .filter(ub.Shelf.kobo_sync,
+                    or_(
+                        func.datetime(ub.KoboReadingState.last_modified) > sync_token.reading_state_last_modified,
+                        func.datetime(ub.BookShelf.date_added) > sync_token.books_last_modified
+                    )).distinct()
+    else:
+        changed_reading_states = changed_reading_states.filter(
+            func.datetime(ub.KoboReadingState.last_modified) > sync_token.reading_state_last_modified)
+
+    changed_reading_states = changed_reading_states.filter(
+        and_(ub.KoboReadingState.user_id == current_user.id,
+             ub.KoboReadingState.book_id.notin_(reading_states_in_new_entitlements)))

     for kobo_reading_state in changed_reading_states.all():
         book = calibre_db.session.query(db.Books).filter(db.Books.id == kobo_reading_state.book_id).one_or_none()
         if book:
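The `books_last_id` one-liner above leans on operator precedence: `or` binds tighter than the conditional expression, so it parses as `(entries[-1].Books.id or -1) if book_count else -1`. A toy check with dict stand-ins (hypothetical data, not the real ORM rows):

```python
# Non-empty result list: the `or -1` guards against a falsy id.
entries = [{"id": 42}]
book_count = len(entries)
books_last_id = entries[-1]["id"] or -1 if book_count else -1
print(books_last_id)  # -> 42

# Empty result list: the else branch wins and entries[-1] is never evaluated,
# so no IndexError is raised.
entries = []
book_count = len(entries)
books_last_id = entries[-1]["id"] or -1 if book_count else -1
print(books_last_id)  # -> -1
```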
@@ -237,7 +299,7 @@ def HandleSyncRequest():
             })
             new_reading_state_last_modified = max(new_reading_state_last_modified, kobo_reading_state.last_modified)

-    sync_shelves(sync_token, sync_results)
+    sync_shelves(sync_token, sync_results, only_kobo_shelves)

     sync_token.books_last_created = new_books_last_created
     sync_token.books_last_modified = new_books_last_modified
@@ -262,12 +324,13 @@ def generate_sync_response(sync_token, sync_results, set_cont=False):
             extra_headers["x-kobo-sync-mode"] = store_response.headers.get("x-kobo-sync-mode")
             extra_headers["x-kobo-recent-reads"] = store_response.headers.get("x-kobo-recent-reads")

-        except Exception as e:
-            log.error("Failed to receive or parse response from Kobo's sync endpoint: " + str(e))
+        except Exception as ex:
+            log.error("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
     if set_cont:
         extra_headers["x-kobo-sync"] = "continue"
     sync_token.to_headers(extra_headers)

+    log.debug("Kobo Sync Content: {}".format(sync_results))
     response = make_response(jsonify(sync_results), extra_headers)

     return response
@@ -305,7 +368,8 @@ def get_download_url_for_book(book, book_format):
             book_format=book_format.lower()
         )
     return url_for(
-        "web.download_link",
+        "kobo.download_book",
         auth_token=kobo_auth.get_auth_token(),
         book_id=book.id,
         book_format=book_format.lower(),
         _external=True,
@@ -391,7 +455,7 @@ def get_metadata(book):

     book_uuid = book.uuid
     metadata = {
-        "Categories": ["00000000-0000-0000-0000-000000000001",],
+        "Categories": ["00000000-0000-0000-0000-000000000001", ],
         # "Contributors": get_author(book),
         "CoverImageId": book_uuid,
         "CrossRevisionId": book_uuid,
@@ -598,13 +662,14 @@ def HandleTagRemoveItem(tag_id):

 # Add new, changed, or deleted shelves to the sync_results.
 # Note: Public shelves that aren't owned by the user aren't supported.
-def sync_shelves(sync_token, sync_results):
+def sync_shelves(sync_token, sync_results, only_kobo_shelves=False):
     new_tags_last_modified = sync_token.tags_last_modified

-    for shelf in ub.session.query(ub.ShelfArchive).filter(func.datetime(ub.ShelfArchive.last_modified) > sync_token.tags_last_modified,
-                                                          ub.ShelfArchive.user_id == current_user.id):
+    for shelf in ub.session.query(ub.ShelfArchive).filter(
+            func.datetime(ub.ShelfArchive.last_modified) > sync_token.tags_last_modified,
+            ub.ShelfArchive.user_id == current_user.id
+    ):
         new_tags_last_modified = max(shelf.last_modified, new_tags_last_modified)

         sync_results.append({
             "DeletedTag": {
                 "Tag": {
@ -614,8 +679,40 @@ def sync_shelves(sync_token, sync_results):
|
|||
}
|
||||
})
|
||||
|
||||
for shelf in ub.session.query(ub.Shelf).filter(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
|
||||
ub.Shelf.user_id == current_user.id):
|
||||
extra_filters = []
|
||||
if only_kobo_shelves:
|
||||
for shelf in ub.session.query(ub.Shelf).filter(
|
||||
func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
|
||||
ub.Shelf.user_id == current_user.id,
|
||||
not ub.Shelf.kobo_sync
|
||||
):
|
||||
sync_results.append({
|
||||
"DeletedTag": {
|
||||
"Tag": {
|
||||
"Id": shelf.uuid,
|
||||
"LastModified": convert_to_kobo_timestamp_string(shelf.last_modified)
|
||||
}
|
||||
}
|
||||
})
|
||||
extra_filters.append(ub.Shelf.kobo_sync)
|
||||
|
||||
if sqlalchemy_version2:
|
||||
shelflist = ub.session.execute(select(ub.Shelf).outerjoin(ub.BookShelf).filter(
|
||||
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
|
||||
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
|
||||
ub.Shelf.user_id == current_user.id,
|
||||
*extra_filters
|
||||
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())).columns(ub.Shelf)
|
||||
else:
|
||||
shelflist = ub.session.query(ub.Shelf).outerjoin(ub.BookShelf).filter(
|
||||
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
|
||||
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
|
||||
ub.Shelf.user_id == current_user.id,
|
||||
*extra_filters
|
||||
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())
|
||||
|
||||
|
||||
for shelf in shelflist:
|
||||
if not shelf_lib.check_shelf_view_permissions(shelf):
|
||||
continue
|
||||
|
||||
|
|
|
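The `sync_shelves` hunk above appends `"DeletedTag"` entries so a shelf disappears from the Kobo device on the next sync. A minimal standalone sketch of that payload shape — the timestamp format is an assumption here, only approximating what `convert_to_kobo_timestamp_string` produces:

```python
from datetime import datetime

def deleted_tag_entry(shelf_uuid, last_modified):
    # Shape mirrors the sync_results entries built in sync_shelves();
    # the "%Y-%m-%dT%H:%M:%SZ" format is an assumption, not the real helper.
    return {
        "DeletedTag": {
            "Tag": {
                "Id": shelf_uuid,
                "LastModified": last_modified.strftime("%Y-%m-%dT%H:%M:%SZ"),
            }
        }
    }

entry = deleted_tag_entry("abc-123", datetime(2021, 5, 1, 12, 0, 0))
print(entry["DeletedTag"]["Tag"]["LastModified"])
```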
@@ -155,7 +155,7 @@ def generate_auth_token(user_id):
 
     for book in books:
         formats = [data.format for data in book.data]
         if not 'KEPUB' in formats and config.config_kepubifypath and 'EPUB' in formats:
-            helper.convert_book_format(book.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.nickname)
+            helper.convert_book_format(book.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.name)
 
     return render_title_template(
         "generate_kobo_auth_url.html",
@@ -62,11 +62,11 @@ class _Logger(logging.Logger):
 
     def debug_no_auth(self, message, *args, **kwargs):
         message = message.strip("\r\n")
         if message.startswith("send: AUTH"):
-            self.debug(message[:16], stacklevel=2, *args, **kwargs)
+            self.debug(message[:16], *args, **kwargs)
         else:
-            self.debug(message, stacklevel=2, *args, **kwargs)
+            self.debug(message, *args, **kwargs)
 
 
 def get(name=None):

@@ -153,11 +153,11 @@ def setup(log_file, log_level=None):
         file_handler.baseFilename = log_file
     else:
         try:
-            file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2, encoding='utf-8')
+            file_handler = RotatingFileHandler(log_file, maxBytes=100000, backupCount=2, encoding='utf-8')
         except IOError:
             if log_file == DEFAULT_LOG_FILE:
                 raise
-            file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=50000, backupCount=2, encoding='utf-8')
+            file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=100000, backupCount=2, encoding='utf-8')
             log_file = ""
     file_handler.setFormatter(FORMATTER)
65  cps/metadata_provider/comicvine.py  Normal file

@@ -0,0 +1,65 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+#   Copyright (C) 2021 OzzieIsaacs
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# ComicVine api document: https://comicvine.gamespot.com/api/documentation
+
+import requests
+from cps.services.Metadata import Metadata
+
+
+class ComicVine(Metadata):
+    __name__ = "ComicVine"
+    __id__ = "comicvine"
+
+    def search(self, query, __):
+        val = list()
+        apikey = "57558043c53943d5d1e96a9ad425b0eb85532ee6"
+        if self.active:
+            headers = {
+                'User-Agent': 'Not Evil Browser'
+            }
+
+            result = requests.get("https://comicvine.gamespot.com/api/search?api_key="
+                                  + apikey + "&resources=issue&query=" + query + "&sort=name:desc&format=json", headers=headers)
+            for r in result.json()['results']:
+                seriesTitle = r['volume'].get('name', "")
+                if r.get('store_date'):
+                    dateFomers = r.get('store_date')
+                else:
+                    dateFomers = r.get('date_added')
+                v = dict()
+                v['id'] = r['id']
+                v['title'] = seriesTitle + " #" + r.get('issue_number', "0") + " - " + (r.get('name', "") or "")
+                v['authors'] = r.get('authors', [])
+                v['description'] = r.get('description', "")
+                v['publisher'] = ""
+                v['publishedDate'] = dateFomers
+                v['tags'] = ["Comics", seriesTitle]
+                v['rating'] = 0
+                v['series'] = seriesTitle
+                v['cover'] = r['image'].get('original_url')
+                v['source'] = {
+                    "id": self.__id__,
+                    "description": "ComicVine Books",
+                    "link": "https://comicvine.gamespot.com/"
+                }
+                v['url'] = r.get('site_detail_url', "")
+                val.append(v)
+        return val
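Each metadata provider returns a list of plain dicts with a fixed set of keys. A standalone sketch (hypothetical input values, no network call) of building one result entry in the shape the ComicVine provider emits:

```python
def build_entry(r, series_title):
    # Assemble a single search-result dict the way ComicVine.search() does.
    # `r` stands in for one item of the API's JSON 'results' list (hypothetical here).
    return {
        'id': r['id'],
        'title': series_title + " #" + r.get('issue_number', "0") + " - " + (r.get('name', "") or ""),
        'authors': r.get('authors', []),
        'description': r.get('description', ""),
        'publisher': "",
        # Prefer the store date, fall back to the date the issue was added.
        'publishedDate': r.get('store_date') or r.get('date_added'),
        'tags': ["Comics", series_title],
        'rating': 0,
        'series': series_title,
    }

entry = build_entry({'id': 7, 'issue_number': "3", 'name': "Example", 'date_added': "2021-01-01"},
                    "Demo Series")
print(entry['title'])
```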
55  cps/metadata_provider/google.py  Normal file

@@ -0,0 +1,55 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+#   Copyright (C) 2021 OzzieIsaacs
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# Google Books api document: https://developers.google.com/books/docs/v1/using
+
+
+import requests
+from cps.services.Metadata import Metadata
+
+
+class Google(Metadata):
+    __name__ = "Google"
+    __id__ = "google"
+
+    def search(self, query, __):
+        if self.active:
+            val = list()
+            result = requests.get("https://www.googleapis.com/books/v1/volumes?q=" + query.replace(" ", "+"))
+            for r in result.json()['items']:
+                v = dict()
+                v['id'] = r['id']
+                v['title'] = r['volumeInfo']['title']
+                v['authors'] = r['volumeInfo'].get('authors', [])
+                v['description'] = r['volumeInfo'].get('description', "")
+                v['publisher'] = r['volumeInfo'].get('publisher', "")
+                v['publishedDate'] = r['volumeInfo'].get('publishedDate', "")
+                v['tags'] = r['volumeInfo'].get('categories', [])
+                v['rating'] = r['volumeInfo'].get('averageRating', 0)
+                if r['volumeInfo'].get('imageLinks'):
+                    v['cover'] = r['volumeInfo']['imageLinks']['thumbnail'].replace("http://", "https://")
+                else:
+                    v['cover'] = "/../../../static/generic_cover.jpg"
+                v['source'] = {
+                    "id": self.__id__,
+                    "description": "Google Books",
+                    "link": "https://books.google.com/"}
+                v['url'] = "https://books.google.com/books?id=" + r['id']
+                val.append(v)
+            return val
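The Google provider upgrades thumbnail links from http to https and falls back to a generic cover when the volume has no image. A tiny framework-free sketch of that cover-selection step, with hypothetical `volumeInfo` data:

```python
def pick_cover(volume_info, generic_cover="/../../../static/generic_cover.jpg"):
    # Prefer the API thumbnail, rewritten to https; otherwise the generic cover.
    links = volume_info.get('imageLinks')
    if links:
        return links['thumbnail'].replace("http://", "https://")
    return generic_cover

print(pick_cover({'imageLinks': {'thumbnail': 'http://example.com/c.jpg'}}))
```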
61  cps/metadata_provider/scholar.py  Normal file

@@ -0,0 +1,61 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+#   Copyright (C) 2021 OzzieIsaacs
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+from scholarly import scholarly
+
+from cps.services.Metadata import Metadata
+
+
+class scholar(Metadata):
+    __name__ = "Google Scholar"
+    __id__ = "googlescholar"
+
+    def search(self, query, generic_cover=""):
+        val = list()
+        if self.active:
+            scholar_gen = scholarly.search_pubs(' '.join(query.split('+')))
+            i = 0
+            for publication in scholar_gen:
+                v = dict()
+                v['id'] = "1234"  # publication['bib'].get('title')
+                v['title'] = publication['bib'].get('title')
+                v['authors'] = publication['bib'].get('author', [])
+                v['description'] = publication['bib'].get('abstract', "")
+                v['publisher'] = publication['bib'].get('venue', "")
+                if publication['bib'].get('pub_year'):
+                    v['publishedDate'] = publication['bib'].get('pub_year') + "-01-01"
+                else:
+                    v['publishedDate'] = ""
+                v['tags'] = ""
+                v['ratings'] = 0
+                v['series'] = ""
+                v['cover'] = generic_cover
+                v['url'] = publication.get('pub_url') or publication.get('eprint_url') or "",
+                v['source'] = {
+                    "id": self.__id__,
+                    "description": "Google Scholar",
+                    "link": "https://scholar.google.com/"
+                }
+                val.append(v)
+                i += 1
+                if (i >= 10):
+                    break
+        return val
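The scholar provider caps the (potentially unbounded) `search_pubs` generator at ten results with a manual counter and `break`. The same cap can be expressed with `itertools.islice` — a standalone sketch over a dummy generator, offered as an alternative design, not what the commit does:

```python
from itertools import islice

def capped(gen, limit=10):
    # Take at most `limit` items from a (possibly unbounded) iterator.
    return list(islice(gen, limit))

print(len(capped(iter(range(100)))))
```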
106  cps/oauth_bb.py

@@ -30,6 +30,7 @@ from flask_babel import gettext as _
 from flask_dance.consumer import oauth_authorized, oauth_error
 from flask_dance.contrib.github import make_github_blueprint, github
 from flask_dance.contrib.google import make_google_blueprint, google
+from oauthlib.oauth2 import TokenExpiredError, InvalidGrantError
 from flask_login import login_user, current_user, login_required
 from sqlalchemy.orm.exc import NoResultFound

@@ -42,6 +43,7 @@ except NameError:
 
 oauth_check = {}
+oauthblueprints = []
 oauth = Blueprint('oauth', __name__)
 log = logger.create()

@@ -87,7 +89,7 @@ def register_user_with_oauth(user=None):
         except NoResultFound:
             # no found, return error
             return
-    ub.session_commit("User {} with OAuth for provider {} registered".format(user.nickname, oauth_key))
+    ub.session_commit("User {} with OAuth for provider {} registered".format(user.name, oauth_key))
 
 
 def logout_oauth_user():

@@ -133,8 +135,8 @@ def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider
         # already bind with user, just login
         if oauth_entry.user:
             login_user(oauth_entry.user)
-            log.debug(u"You are now logged in as: '%s'", oauth_entry.user.nickname)
-            flash(_(u"you are now logged in as: '%(nickname)s'", nickname= oauth_entry.user.nickname),
+            log.debug(u"You are now logged in as: '%s'", oauth_entry.user.name)
+            flash(_(u"you are now logged in as: '%(nickname)s'", nickname= oauth_entry.user.name),
                   category="success")
             return redirect(url_for('web.index'))
         else:

@@ -145,9 +147,10 @@ def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider
                 ub.session.add(oauth_entry)
                 ub.session.commit()
                 flash(_(u"Link to %(oauth)s Succeeded", oauth=provider_name), category="success")
+                log.info("Link to {} Succeeded".format(provider_name))
                 return redirect(url_for('web.profile'))
-            except Exception as e:
-                log.debug_or_exception(e)
+            except Exception as ex:
+                log.debug_or_exception(ex)
                 ub.session.rollback()
         else:
             flash(_(u"Login failed, No User Linked With OAuth Account"), category="error")

@@ -193,8 +196,9 @@ def unlink_oauth(provider):
             ub.session.commit()
             logout_oauth_user()
             flash(_(u"Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success")
+            log.info("Unlink to {} Succeeded".format(oauth_check[provider]))
-        except Exception as e:
-            log.debug_or_exception(e)
+        except Exception as ex:
+            log.debug_or_exception(ex)
             ub.session.rollback()
             flash(_(u"Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error")
     except NoResultFound:

@@ -203,7 +207,6 @@ def unlink_oauth(provider):
     return redirect(url_for('web.profile'))
 
-
 def generate_oauth_blueprints():
     oauthblueprints = []
     if not ub.session.query(ub.OAuthProvider).count():
         for provider in ("github", "google"):
             oauthProvider = ub.OAuthProvider()

@@ -257,11 +260,13 @@ if ub.oauth_support:
     def github_logged_in(blueprint, token):
         if not token:
             flash(_(u"Failed to log in with GitHub."), category="error")
+            log.error("Failed to log in with GitHub")
             return False
 
         resp = blueprint.session.get("/user")
         if not resp.ok:
             flash(_(u"Failed to fetch user info from GitHub."), category="error")
+            log.error("Failed to fetch user info from GitHub")
             return False
 
         github_info = resp.json()

@@ -273,11 +278,13 @@ if ub.oauth_support:
     def google_logged_in(blueprint, token):
         if not token:
             flash(_(u"Failed to log in with Google."), category="error")
+            log.error("Failed to log in with Google")
            return False
 
         resp = blueprint.session.get("/oauth2/v2/userinfo")
         if not resp.ok:
             flash(_(u"Failed to fetch user info from Google."), category="error")
+            log.error("Failed to fetch user info from Google")
             return False
 
         google_info = resp.json()

@@ -299,39 +306,6 @@ if ub.oauth_support:
         ) # ToDo: Translate
         flash(msg, category="error")
 
-
-    @oauth.route('/link/github')
-    @oauth_required
-    def github_login():
-        if not github.authorized:
-            return redirect(url_for('github.login'))
-        account_info = github.get('/user')
-        if account_info.ok:
-            account_info_json = account_info.json()
-            return bind_oauth_or_register(oauthblueprints[0]['id'], account_info_json['id'], 'github.login', 'github')
-        flash(_(u"GitHub Oauth error, please retry later."), category="error")
-        return redirect(url_for('web.login'))
-
-
-    @oauth.route('/unlink/github', methods=["GET"])
-    @login_required
-    def github_login_unlink():
-        return unlink_oauth(oauthblueprints[0]['id'])
-
-
-    @oauth.route('/link/google')
-    @oauth_required
-    def google_login():
-        if not google.authorized:
-            return redirect(url_for("google.login"))
-        resp = google.get("/oauth2/v2/userinfo")
-        if resp.ok:
-            account_info_json = resp.json()
-            return bind_oauth_or_register(oauthblueprints[1]['id'], account_info_json['id'], 'google.login', 'google')
-        flash(_(u"Google Oauth error, please retry later."), category="error")
-        return redirect(url_for('web.login'))
-
     @oauth_error.connect_via(oauthblueprints[1]['blueprint'])
     def google_error(blueprint, error, error_description=None, error_uri=None):
         msg = (

@@ -346,7 +320,49 @@ if ub.oauth_support:
         flash(msg, category="error")
 
+    @oauth.route('/link/github')
+    @oauth_required
+    def github_login():
+        if not github.authorized:
+            return redirect(url_for('github.login'))
+        try:
+            account_info = github.get('/user')
+            if account_info.ok:
+                account_info_json = account_info.json()
+                return bind_oauth_or_register(oauthblueprints[0]['id'], account_info_json['id'], 'github.login', 'github')
+            flash(_(u"GitHub Oauth error, please retry later."), category="error")
+            log.error("GitHub Oauth error, please retry later")
+        except (InvalidGrantError, TokenExpiredError) as e:
+            flash(_(u"GitHub Oauth error: {}").format(e), category="error")
+            log.error(e)
+        return redirect(url_for('web.login'))
+
+
+    @oauth.route('/unlink/github', methods=["GET"])
+    @login_required
+    def github_login_unlink():
+        return unlink_oauth(oauthblueprints[0]['id'])
+
+
+    @oauth.route('/link/google')
+    @oauth_required
+    def google_login():
+        if not google.authorized:
+            return redirect(url_for("google.login"))
+        try:
+            resp = google.get("/oauth2/v2/userinfo")
+            if resp.ok:
+                account_info_json = resp.json()
+                return bind_oauth_or_register(oauthblueprints[1]['id'], account_info_json['id'], 'google.login', 'google')
+            flash(_(u"Google Oauth error, please retry later."), category="error")
+            log.error("Google Oauth error, please retry later")
+        except (InvalidGrantError, TokenExpiredError) as e:
+            flash(_(u"Google Oauth error: {}").format(e), category="error")
+            log.error(e)
+        return redirect(url_for('web.login'))
+
+
     @oauth.route('/unlink/google', methods=["GET"])
     @login_required
     def google_login_unlink():
         return unlink_oauth(oauthblueprints[1]['id'])
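The reworked `github_login`/`google_login` wrap the userinfo call in `try/except` so an expired or invalid OAuth token is reported to the user instead of surfacing as a server error. A framework-free sketch of that pattern — the exception class and both callables are hypothetical stand-ins, not flask-dance or oauthlib APIs:

```python
class TokenExpiredError(Exception):
    """Stand-in for oauthlib's TokenExpiredError (hypothetical here)."""

def fetch_userinfo(get_user, on_error):
    # Mirror the new route pattern: try the API call, report token
    # failures via on_error, and signal "no result" with None.
    try:
        return get_user()
    except TokenExpiredError as e:
        on_error("Oauth error: {}".format(e))
        return None

errors = []
print(fetch_userinfo(lambda: {"id": 1}, errors.append), errors)
```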
157  cps/opds.py

@@ -27,7 +27,7 @@ from functools import wraps
 
 from flask import Blueprint, request, render_template, Response, g, make_response, abort
 from flask_login import current_user
-from sqlalchemy.sql.expression import func, text, or_, and_
+from sqlalchemy.sql.expression import func, text, or_, and_, true
 from werkzeug.security import check_password_hash
 
 from . import constants, logger, config, db, calibre_db, ub, services, get_locale, isoLanguages

@@ -94,7 +94,45 @@ def feed_cc_search(query):
 @opds.route("/opds/search", methods=["GET"])
 @requires_basic_auth_if_no_ano
 def feed_normal_search():
-    return feed_search(request.args.get("query").strip())
+    return feed_search(request.args.get("query", "").strip())
+
+
+@opds.route("/opds/books")
+@requires_basic_auth_if_no_ano
+def feed_booksindex():
+    shift = 0
+    off = int(request.args.get("offset") or 0)
+    entries = calibre_db.session.query(func.upper(func.substr(db.Books.sort, 1, 1)).label('id'))\
+        .filter(calibre_db.common_filters()).group_by(func.upper(func.substr(db.Books.sort, 1, 1))).all()
+
+    elements = []
+    if off == 0:
+        elements.append({'id': "00", 'name': _("All")})
+        shift = 1
+    for entry in entries[
+        off + shift - 1:
+        int(off + int(config.config_books_per_page) - shift)]:
+        elements.append({'id': entry.id, 'name': entry.id})
+
+    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
+                            len(entries) + 1)
+    return render_xml_template('feed.xml',
+                               letterelements=elements,
+                               folder='opds.feed_letter_books',
+                               pagination=pagination)
+
+
+@opds.route("/opds/books/letter/<book_id>")
+@requires_basic_auth_if_no_ano
+def feed_letter_books(book_id):
+    off = request.args.get("offset") or 0
+    letter = true() if book_id == "00" else func.upper(db.Books.sort).startswith(book_id)
+    entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
+                                                        db.Books,
+                                                        letter,
+                                                        [db.Books.sort])
+
+    return render_xml_template('feed.xml', entries=entries, pagination=pagination)
 
 
 @opds.route("/opds/new")

@@ -150,14 +188,41 @@ def feed_hot():
 @opds.route("/opds/author")
 @requires_basic_auth_if_no_ano
 def feed_authorindex():
-    off = request.args.get("offset") or 0
-    entries = calibre_db.session.query(db.Authors).join(db.books_authors_link).join(db.Books)\
-        .filter(calibre_db.common_filters())\
-        .group_by(text('books_authors_link.author'))\
-        .order_by(db.Authors.sort).limit(config.config_books_per_page)\
-        .offset(off)
+    shift = 0
+    off = int(request.args.get("offset") or 0)
+    entries = calibre_db.session.query(func.upper(func.substr(db.Authors.sort, 1, 1)).label('id'))\
+        .join(db.books_authors_link).join(db.Books).filter(calibre_db.common_filters())\
+        .group_by(func.upper(func.substr(db.Authors.sort, 1, 1))).all()
+
+    elements = []
+    if off == 0:
+        elements.append({'id': "00", 'name': _("All")})
+        shift = 1
+    for entry in entries[
+        off + shift - 1:
+        int(off + int(config.config_books_per_page) - shift)]:
+        elements.append({'id': entry.id, 'name': entry.id})
+
     pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
-                            len(calibre_db.session.query(db.Authors).all()))
+                            len(entries) + 1)
+    return render_xml_template('feed.xml',
+                               letterelements=elements,
+                               folder='opds.feed_letter_author',
+                               pagination=pagination)
+
+
+@opds.route("/opds/author/letter/<book_id>")
+@requires_basic_auth_if_no_ano
+def feed_letter_author(book_id):
+    off = request.args.get("offset") or 0
+    letter = true() if book_id == "00" else func.upper(db.Authors.sort).startswith(book_id)
+    entries = calibre_db.session.query(db.Authors).join(db.books_authors_link).join(db.Books)\
+        .filter(calibre_db.common_filters()).filter(letter)\
+        .group_by(text('books_authors_link.author'))\
+        .order_by(db.Authors.sort)
+    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
+                            entries.count())
+    entries = entries.limit(config.config_books_per_page).offset(off).all()
+    return render_xml_template('feed.xml', listelements=entries, folder='opds.feed_author', pagination=pagination)

@@ -201,17 +266,41 @@ def feed_publisher(book_id):
 @opds.route("/opds/category")
 @requires_basic_auth_if_no_ano
 def feed_categoryindex():
+    shift = 0
+    off = int(request.args.get("offset") or 0)
+    entries = calibre_db.session.query(func.upper(func.substr(db.Tags.name, 1, 1)).label('id'))\
+        .join(db.books_tags_link).join(db.Books).filter(calibre_db.common_filters())\
+        .group_by(func.upper(func.substr(db.Tags.name, 1, 1))).all()
+    elements = []
+    if off == 0:
+        elements.append({'id': "00", 'name': _("All")})
+        shift = 1
+    for entry in entries[
+        off + shift - 1:
+        int(off + int(config.config_books_per_page) - shift)]:
+        elements.append({'id': entry.id, 'name': entry.id})
+
+    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
+                            len(entries) + 1)
+    return render_xml_template('feed.xml',
+                               letterelements=elements,
+                               folder='opds.feed_letter_category',
+                               pagination=pagination)
+
+@opds.route("/opds/category/letter/<book_id>")
+@requires_basic_auth_if_no_ano
+def feed_letter_category(book_id):
     off = request.args.get("offset") or 0
+    letter = true() if book_id == "00" else func.upper(db.Tags.name).startswith(book_id)
     entries = calibre_db.session.query(db.Tags)\
         .join(db.books_tags_link)\
         .join(db.Books)\
-        .filter(calibre_db.common_filters())\
+        .filter(calibre_db.common_filters()).filter(letter)\
         .group_by(text('books_tags_link.tag'))\
-        .order_by(db.Tags.name)\
-        .offset(off)\
-        .limit(config.config_books_per_page)
+        .order_by(db.Tags.name)
     pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
-                            len(calibre_db.session.query(db.Tags).all()))
+                            entries.count())
+    entries = entries.offset(off).limit(config.config_books_per_page).all()
     return render_xml_template('feed.xml', listelements=entries, folder='opds.feed_category', pagination=pagination)

@@ -229,16 +318,40 @@ def feed_category(book_id):
 @opds.route("/opds/series")
 @requires_basic_auth_if_no_ano
 def feed_seriesindex():
+    shift = 0
+    off = int(request.args.get("offset") or 0)
+    entries = calibre_db.session.query(func.upper(func.substr(db.Series.sort, 1, 1)).label('id'))\
+        .join(db.books_series_link).join(db.Books).filter(calibre_db.common_filters())\
+        .group_by(func.upper(func.substr(db.Series.sort, 1, 1))).all()
+    elements = []
+    if off == 0:
+        elements.append({'id': "00", 'name': _("All")})
+        shift = 1
+    for entry in entries[
+        off + shift - 1:
+        int(off + int(config.config_books_per_page) - shift)]:
+        elements.append({'id': entry.id, 'name': entry.id})
+    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
+                            len(entries) + 1)
+    return render_xml_template('feed.xml',
+                               letterelements=elements,
+                               folder='opds.feed_letter_series',
+                               pagination=pagination)
+
+@opds.route("/opds/series/letter/<book_id>")
+@requires_basic_auth_if_no_ano
+def feed_letter_series(book_id):
     off = request.args.get("offset") or 0
+    letter = true() if book_id == "00" else func.upper(db.Series.sort).startswith(book_id)
     entries = calibre_db.session.query(db.Series)\
         .join(db.books_series_link)\
         .join(db.Books)\
-        .filter(calibre_db.common_filters())\
+        .filter(calibre_db.common_filters()).filter(letter)\
         .group_by(text('books_series_link.series'))\
-        .order_by(db.Series.sort)\
-        .offset(off).all()
+        .order_by(db.Series.sort)
     pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
-                            len(calibre_db.session.query(db.Series).all()))
+                            entries.count())
+    entries = entries.offset(off).limit(config.config_books_per_page).all()
     return render_xml_template('feed.xml', listelements=entries, folder='opds.feed_series', pagination=pagination)

@@ -269,7 +382,7 @@ def feed_ratingindex():
                             len(entries))
     element = list()
     for entry in entries:
-        element.append(FeedObject(entry[0].id, "{} Stars".format(entry.name)))
+        element.append(FeedObject(entry[0].id, _("{} Stars").format(entry.name)))
     return render_xml_template('feed.xml', listelements=element, folder='opds.feed_ratings', pagination=pagination)

@@ -428,13 +541,13 @@ def check_auth(username, password):
         username = username.encode('windows-1252')
     except UnicodeEncodeError:
         username = username.encode('utf-8')
-    user = ub.session.query(ub.User).filter(func.lower(ub.User.nickname) ==
+    user = ub.session.query(ub.User).filter(func.lower(ub.User.name) ==
                                             username.decode('utf-8').lower()).first()
     if bool(user and check_password_hash(str(user.password), password)):
         return True
     else:
-        ipAdress = request.headers.get('X-Forwarded-For', request.remote_addr)
-        log.warning('OPDS Login failed for user "%s" IP-address: %s', username.decode('utf-8'), ipAdress)
+        ip_Address = request.headers.get('X-Forwarded-For', request.remote_addr)
+        log.warning('OPDS Login failed for user "%s" IP-address: %s', username.decode('utf-8'), ip_Address)
         return False
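All the new letter-index endpoints above page through the first-letter groups with the same `off`/`shift` slice, prepending an "All" element on the first page. A standalone sketch of just that windowing logic, with a hypothetical `per_page` value and a plain list standing in for the database rows:

```python
def letter_page(entries, off, per_page):
    # First page gets a synthetic "All" element (id "00") and shifts the
    # slice window by one, mirroring feed_booksindex()/feed_authorindex().
    shift = 0
    elements = []
    if off == 0:
        elements.append({'id': "00", 'name': "All"})
        shift = 1
    for entry in entries[off + shift - 1: off + per_page - shift]:
        elements.append({'id': entry, 'name': entry})
    return elements

print([e['id'] for e in letter_page(list("ABCDEFG"), 0, 4)])
```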
@@ -62,7 +62,7 @@ def remote_login():
     ub.session_commit()
     verify_url = url_for('remotelogin.verify_token', token=auth_token.auth_token, _external=true)
     log.debug(u"Remot Login request with token: %s", auth_token.auth_token)
-    return render_title_template('remote_login.html', title=_(u"login"), token=auth_token.auth_token,
+    return render_title_template('remote_login.html', title=_(u"Login"), token=auth_token.auth_token,
                                  verify_url=verify_url, page="remotelogin")

@@ -126,11 +126,11 @@ def token_verified():
         login_user(user)
 
         ub.session.delete(auth_token)
-        ub.session_commit("User {} logged in via remotelogin, token deleted".format(user.nickname))
+        ub.session_commit("User {} logged in via remotelogin, token deleted".format(user.name))
 
         data['status'] = 'success'
         log.debug(u"Remote Login for userid %s succeded", user.id)
-        flash(_(u"you are now logged in as: '%(nickname)s'", nickname=user.nickname), category="success")
+        flash(_(u"you are now logged in as: '%(nickname)s'", nickname=user.name), category="success")
 
     response = make_response(json.dumps(data, ensure_ascii=False))
     response.headers["Content-Type"] = "application/json; charset=utf-8"
@@ -42,10 +42,16 @@ def get_sidebar_config(kwargs=None):
     sidebar.append({"glyph": "glyphicon-fire", "text": _('Hot Books'), "link": 'web.books_list', "id": "hot",
                     "visibility": constants.SIDEBAR_HOT, 'public': True, "page": "hot",
                     "show_text": _('Show Hot Books'), "config_show": True})
-    sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.books_list',
-                    "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
-                    "page": "download", "show_text": _('Show Downloaded Books'),
-                    "config_show": content})
+    if current_user.role_admin():
+        sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.download_list',
+                        "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
+                        "page": "download", "show_text": _('Show Downloaded Books'),
+                        "config_show": content})
+    else:
+        sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.books_list',
+                        "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
+                        "page": "download", "show_text": _('Show Downloaded Books'),
+                        "config_show": content})
     sidebar.append(
         {"glyph": "glyphicon-star", "text": _('Top Rated Books'), "link": 'web.books_list', "id": "rated",
          "visibility": constants.SIDEBAR_BEST_RATED, 'public': True, "page": "rated",

@@ -59,7 +65,7 @@ def get_sidebar_config(kwargs=None):
                     "show_text": _('Show unread'), "config_show": False})
     sidebar.append({"glyph": "glyphicon-random", "text": _('Discover'), "link": 'web.books_list', "id": "rand",
                     "visibility": constants.SIDEBAR_RANDOM, 'public': True, "page": "discover",
-                    "show_text": _('Show random books'), "config_show": True})
+                    "show_text": _('Show Random Books'), "config_show": True})
     sidebar.append({"glyph": "glyphicon-inbox", "text": _('Categories'), "link": 'web.category_list', "id": "cat",
                     "visibility": constants.SIDEBAR_CATEGORY, 'public': True, "page": "category",
                     "show_text": _('Show category selection'), "config_show": True})
cps/search_metadata.py (new file, 118 lines)
@@ -0,0 +1,118 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from __future__ import division, print_function, unicode_literals
import os
import json
import importlib
import sys
import inspect
import datetime
import concurrent.futures

from flask import Blueprint, request, Response, url_for
from flask_login import current_user
from flask_login import login_required
from sqlalchemy.orm.attributes import flag_modified
from sqlalchemy.exc import OperationalError, InvalidRequestError

from . import constants, logger, ub
from cps.services.Metadata import Metadata


meta = Blueprint('metadata', __name__)

log = logger.create()

new_list = list()
meta_dir = os.path.join(constants.BASE_DIR, "cps", "metadata_provider")
modules = os.listdir(os.path.join(constants.BASE_DIR, "cps", "metadata_provider"))
for f in modules:
    if os.path.isfile(os.path.join(meta_dir, f)) and not f.endswith('__init__.py'):
        a = os.path.basename(f)[:-3]
        try:
            importlib.import_module("cps.metadata_provider." + a)
            new_list.append(a)
        except ImportError:
            log.error("Import error for metadata source: {}".format(a))
            pass

def list_classes(provider_list):
    classes = list()
    for element in provider_list:
        for name, obj in inspect.getmembers(sys.modules["cps.metadata_provider." + element]):
            if inspect.isclass(obj) and name != "Metadata" and issubclass(obj, Metadata):
                classes.append(obj())
    return classes

cl = list_classes(new_list)

@meta.route("/metadata/provider")
@login_required
def metadata_provider():
    active = current_user.view_settings.get('metadata', {})
    provider = list()
    for c in cl:
        ac = active.get(c.__id__, True)
        provider.append({"name": c.__name__, "active": ac, "initial": ac, "id": c.__id__})
    return Response(json.dumps(provider), mimetype='application/json')

@meta.route("/metadata/provider", methods=['POST'])
@meta.route("/metadata/provider/<prov_name>", methods=['POST'])
@login_required
def metadata_change_active_provider(prov_name):
    new_state = request.get_json()
    active = current_user.view_settings.get('metadata', {})
    active[new_state['id']] = new_state['value']
    current_user.view_settings['metadata'] = active
    try:
        try:
            flag_modified(current_user, "view_settings")
        except AttributeError:
            pass
        ub.session.commit()
    except (InvalidRequestError, OperationalError):
        log.error("Invalid request received: {}".format(request))
        return "Invalid request", 400
    if "initial" in new_state and prov_name:
        for c in cl:
            if c.__id__ == prov_name:
                data = c.search(new_state.get('query', ""))
                break
        return Response(json.dumps(data), mimetype='application/json')
    return ""

@meta.route("/metadata/search", methods=['POST'])
@login_required
def metadata_search():
    query = request.form.to_dict().get('query')
    data = list()
    active = current_user.view_settings.get('metadata', {})
    if query:
        static_cover = url_for('static', filename='generic_cover.jpg')
        with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
            meta = {executor.submit(c.search, query, static_cover): c for c in cl if active.get(c.__id__, True)}
            for future in concurrent.futures.as_completed(meta):
                data.extend(future.result())
    return Response(json.dumps(data), mimetype='application/json')
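The new `search_metadata.py` discovers provider modules at import time and fans a search query out over a thread pool. A minimal provider that this loader pattern would pick up could look like the sketch below; the `Metadata` base class mirrors the one added in `cps/services/Metadata.py`, while `DummyProvider` and its result fields are illustrative assumptions, not part of the commit:

```python
import concurrent.futures


class Metadata:
    # mirrors the base class added in cps/services/Metadata.py
    __name__ = "Generic"

    def __init__(self):
        self.active = True

    def set_status(self, state):
        self.active = state


class DummyProvider(Metadata):
    # hypothetical provider; real ones live in cps/metadata_provider/
    __name__ = "Dummy"
    __id__ = "dummy"

    def search(self, query, generic_cover=""):
        # a real provider would query an external catalogue here
        return [{"title": query.title(), "cover": generic_cover, "source": self.__id__}]


def fan_out_search(providers, query, cover=""):
    # same fan-out pattern as metadata_search(): one worker per active provider,
    # results collected as the futures complete
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        futures = {executor.submit(p.search, query, cover): p for p in providers if p.active}
        for future in concurrent.futures.as_completed(futures):
            results.extend(future.result())
    return results
```

Calling `fan_out_search([DummyProvider()], "dune")` yields one result dict per active provider; a provider deactivated via `set_status(False)` is skipped entirely.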
cps/services/Metadata.py (new file, 27 lines)
@@ -0,0 +1,27 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.


class Metadata():
    __name__ = "Generic"

    def __init__(self):
        self.active = True

    def set_status(self, state):
        self.active = state
@@ -183,3 +183,12 @@ class SyncToken:
             },
         }
         return b64encode_json(token)
+
+    def __str__(self):
+        return "{},{},{},{},{},{},{}".format(self.raw_kobo_store_token,
+                                             self.books_last_created,
+                                             self.books_last_modified,
+                                             self.archive_last_modified,
+                                             self.reading_state_last_modified,
+                                             self.tags_last_modified,
+                                             self.books_last_id)
@@ -45,3 +45,9 @@ except ImportError as err:
     log.debug("Cannot import SyncToken, syncing books with Kobo Devices will not work: %s", err)
     kobo = None
     SyncToken = None
+
+try:
+    from . import gmail
+except ImportError as err:
+    log.debug("Cannot import gmail, sending books via Gmail Oauth2 Verification will not work: %s", err)
+    gmail = None
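The hunk above guards the new gmail service behind the same optional-import pattern already used for Kobo's `SyncToken`: try the import, log on failure, and leave a `None` sentinel so callers can feature-test before use. A standalone sketch of the pattern (the module name below is a deliberately nonexistent stand-in for `from . import gmail`):

```python
import logging

log = logging.getLogger(__name__)

# Optional-dependency pattern: the feature degrades to "disabled" instead of
# crashing at import time when the dependency is missing.
try:
    import some_truly_optional_dependency  # stand-in; real code imports gmail here
except ImportError as err:
    log.debug("optional feature disabled: %s", err)
    some_truly_optional_dependency = None
```

Callers then check `if some_truly_optional_dependency is not None:` before invoking the feature, exactly as Calibre-Web checks the `gmail` and `kobo` sentinels.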
cps/services/gmail.py (new file, 83 lines)
@@ -0,0 +1,83 @@
from __future__ import print_function
import os.path
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.discovery import build
from google.oauth2.credentials import Credentials

from datetime import datetime
import base64
from flask_babel import gettext as _
from ..constants import BASE_DIR
from .. import logger


log = logger.create()

SCOPES = ['openid', 'https://www.googleapis.com/auth/gmail.send', 'https://www.googleapis.com/auth/userinfo.email']

def setup_gmail(token):
    # If there are no (valid) credentials available, let the user log in.
    creds = None
    if "token" in token:
        creds = Credentials(
            token=token['token'],
            refresh_token=token['refresh_token'],
            token_uri=token['token_uri'],
            client_id=token['client_id'],
            client_secret=token['client_secret'],
            scopes=token['scopes'],
        )
        creds.expiry = datetime.fromisoformat(token['expiry'])

    if not creds or not creds.valid:
        # don't forget to dump one more time after the refresh
        # also, some file-locking routines wouldn't be needless
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            cred_file = os.path.join(BASE_DIR, 'gmail.json')
            if not os.path.exists(cred_file):
                raise Exception(_("Found no valid gmail.json file with OAuth information"))
            flow = InstalledAppFlow.from_client_secrets_file(
                os.path.join(BASE_DIR, 'gmail.json'), SCOPES)
            creds = flow.run_local_server(port=0)
        user_info = get_user_info(creds)
        return {
            'token': creds.token,
            'refresh_token': creds.refresh_token,
            'token_uri': creds.token_uri,
            'client_id': creds.client_id,
            'client_secret': creds.client_secret,
            'scopes': creds.scopes,
            'expiry': creds.expiry.isoformat(),
            'email': user_info
        }
    return {}

def get_user_info(credentials):
    user_info_service = build(serviceName='oauth2', version='v2', credentials=credentials)
    user_info = user_info_service.userinfo().get().execute()
    return user_info.get('email', "")

def send_messsage(token, msg):
    log.debug("Start sending e-mail via Gmail")
    creds = Credentials(
        token=token['token'],
        refresh_token=token['refresh_token'],
        token_uri=token['token_uri'],
        client_id=token['client_id'],
        client_secret=token['client_secret'],
        scopes=token['scopes'],
    )
    creds.expiry = datetime.fromisoformat(token['expiry'])
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    service = build('gmail', 'v1', credentials=creds)
    message_as_bytes = msg.as_bytes()  # the message should be converted from string to bytes
    message_as_base64 = base64.urlsafe_b64encode(message_as_bytes)  # encode in base64 (printable letters coding)
    raw = message_as_base64.decode()  # convert to something JSON serializable
    body = {'raw': raw}

    (service.users().messages().send(userId='me', body=body).execute())
    log.debug("E-mail send successfully via Gmail")
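The encoding step in `send_messsage()` is independent of the Google client libraries: Gmail's `users.messages.send` endpoint takes the RFC 2822 message bytes, base64url-encoded, in the `raw` field. A stdlib-only sketch of that conversion (the helper name and addresses are illustrative):

```python
import base64
from email.mime.text import MIMEText


def to_gmail_raw_body(msg):
    # Gmail's users.messages.send expects {'raw': <urlsafe-base64 of the message bytes>}
    message_as_bytes = msg.as_bytes()
    message_as_base64 = base64.urlsafe_b64encode(message_as_bytes)
    return {'raw': message_as_base64.decode()}  # str, so it is JSON serializable


msg = MIMEText("test body")
msg['To'] = 'reader@example.com'
msg['Subject'] = 'Your book'
body = to_gmail_raw_body(msg)
```

Decoding `body['raw']` with `base64.urlsafe_b64decode` recovers the original headers and payload, which is a convenient way to sanity-check attachments before sending.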
@@ -69,6 +69,7 @@ class WorkerThread(threading.Thread):
     def add(cls, user, task):
         ins = cls.getInstance()
         ins.num += 1
+        log.debug("Add Task for user: {}: {}".format(user, task))
         ins.queue.put(QueuedTask(
             num=ins.num,
             user=user,
@@ -164,9 +165,9 @@ class CalibreTask:
         # catch any unhandled exceptions in a task and automatically fail it
         try:
             self.run(*args)
-        except Exception as e:
-            self._handleError(str(e))
-            log.debug_or_exception(e)
+        except Exception as ex:
+            self._handleError(str(ex))
+            log.debug_or_exception(ex)

         self.end_time = datetime.now()

@@ -209,10 +210,13 @@ class CalibreTask:
         # By default, we're good to clean a task if it's "Done"
         return self.stat in (STAT_FINISH_SUCCESS, STAT_FAIL)

-    @progress.setter
-    def progress(self, x):
-        # todo: throw error if outside of [0,1]
-        self._progress = x
+    '''@progress.setter
+    def progress(self, x):
+        if x > 1:
+            x = 1
+        if x < 0:
+            x = 0
+        self._progress = x'''

     @property
     def self_cleanup(self):
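The last hunk above swaps the unclamped `progress` setter (with its "todo: throw error" note) for a commented-out variant that clamps into [0, 1] instead of raising. An executable sketch of that clamped variant on its own (the class name is illustrative, not Calibre-Web's):

```python
class TaskProgress:
    # minimal sketch of the clamped setter left commented out in CalibreTask
    def __init__(self):
        self._progress = 0

    @property
    def progress(self):
        return self._progress

    @progress.setter
    def progress(self, x):
        # clamp out-of-range values instead of raising, so a task that
        # over-reports work can never render a >100% progress bar
        if x > 1:
            x = 1
        if x < 0:
            x = 0
        self._progress = x
```

Clamping is the friendlier choice for a background worker: a slightly miscounted byte total degrades to "100%" rather than killing the task with an exception.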
cps/shelf.py (127 changed lines)
@@ -21,20 +21,20 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

 from __future__ import division, print_function, unicode_literals
-from datetime import datetime

+import sys
+from datetime import datetime

-from flask import Blueprint, request, flash, redirect, url_for
+from flask import Blueprint, flash, redirect, request, url_for
 from flask_babel import gettext as _
-from flask_login import login_required, current_user
+from flask_login import current_user, login_required
+from sqlalchemy.exc import InvalidRequestError, OperationalError
 from sqlalchemy.sql.expression import func, true
-from sqlalchemy.exc import OperationalError, InvalidRequestError

-from . import logger, ub, calibre_db, db
+from . import calibre_db, config, db, logger, ub
 from .render_template import render_title_template
 from .usermanagement import login_required_if_no_ano


 shelf = Blueprint('shelf', __name__)
 log = logger.create()

@@ -72,10 +72,9 @@ def add_to_shelf(shelf_id, book_id):

     if not check_shelf_edit_permissions(shelf):
         if not xhr:
-            flash(_(u"Sorry you are not allowed to add a book to the the shelf: %(shelfname)s", shelfname=shelf.name),
-                  category="error")
+            flash(_(u"Sorry you are not allowed to add a book to that shelf"), category="error")
             return redirect(url_for('web.index'))
-        return "Sorry you are not allowed to add a book to the the shelf: %s" % shelf.name, 403
+        return "Sorry you are not allowed to add a book to the that shelf", 403

     book_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id,
                                                           ub.BookShelf.book_id == book_id).first()

@@ -99,12 +98,14 @@ def add_to_shelf(shelf_id, book_id):
         ub.session.commit()
     except (OperationalError, InvalidRequestError):
         ub.session.rollback()
+        log.error("Settings DB is not Writeable")
         flash(_(u"Settings DB is not Writeable"), category="error")
         if "HTTP_REFERER" in request.environ:
             return redirect(request.environ["HTTP_REFERER"])
         else:
             return redirect(url_for('web.index'))
     if not xhr:
+        log.debug("Book has been added to shelf: {}".format(shelf.name))
         flash(_(u"Book has been added to shelf: %(sname)s", sname=shelf.name), category="success")
         if "HTTP_REFERER" in request.environ:
             return redirect(request.environ["HTTP_REFERER"])

@@ -123,6 +124,7 @@ def search_to_shelf(shelf_id):
         return redirect(url_for('web.index'))

     if not check_shelf_edit_permissions(shelf):
+        log.warning("You are not allowed to add a book to the the shelf: {}".format(shelf.name))
         flash(_(u"You are not allowed to add a book to the the shelf: %(name)s", name=shelf.name), category="error")
         return redirect(url_for('web.index'))

@@ -140,7 +142,7 @@ def search_to_shelf(shelf_id):
             books_for_shelf = ub.searched_ids[current_user.id]

         if not books_for_shelf:
-            log.error("Books are already part of %s", shelf.name)
+            log.error("Books are already part of {}".format(shelf.name))
             flash(_(u"Books are already part of the shelf: %(name)s", name=shelf.name), category="error")
             return redirect(url_for('web.index'))

@@ -156,8 +158,10 @@ def search_to_shelf(shelf_id):
             flash(_(u"Books have been added to shelf: %(sname)s", sname=shelf.name), category="success")
         except (OperationalError, InvalidRequestError):
             ub.session.rollback()
-            flash(_(u"Settings DB is not Writeable"), category="error")
+            log.error("Settings DB is not Writeable")
+            flash(_("Settings DB is not Writeable"), category="error")
     else:
+        log.error("Could not add books to shelf: {}".format(shelf.name))
         flash(_(u"Could not add books to shelf: %(sname)s", sname=shelf.name), category="error")
     return redirect(url_for('web.index'))

@@ -168,7 +172,7 @@ def remove_from_shelf(shelf_id, book_id):
     xhr = request.headers.get('X-Requested-With') == 'XMLHttpRequest'
     shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
     if shelf is None:
-        log.error("Invalid shelf specified: %s", shelf_id)
+        log.error("Invalid shelf specified: {}".format(shelf_id))
         if not xhr:
             return redirect(url_for('web.index'))
         return "Invalid shelf specified", 400

@@ -197,7 +201,8 @@ def remove_from_shelf(shelf_id, book_id):
             ub.session.commit()
         except (OperationalError, InvalidRequestError):
             ub.session.rollback()
-            flash(_(u"Settings DB is not Writeable"), category="error")
+            log.error("Settings DB is not Writeable")
+            flash(_("Settings DB is not Writeable"), category="error")
         if "HTTP_REFERER" in request.environ:
             return redirect(request.environ["HTTP_REFERER"])
         else:

@@ -211,6 +216,7 @@ def remove_from_shelf(shelf_id, book_id):
             return "", 204
     else:
         if not xhr:
+            log.warning("You are not allowed to remove a book from shelf: {}".format(shelf.name))
             flash(_(u"Sorry you are not allowed to remove a book from this shelf: %(sname)s", sname=shelf.name),
                   category="error")
             return redirect(url_for('web.index'))

@@ -221,74 +227,86 @@ def remove_from_shelf(shelf_id, book_id):
 @login_required
 def create_shelf():
     shelf = ub.Shelf()
-    return create_edit_shelf(shelf, title=_(u"Create a Shelf"), page="shelfcreate")
+    return create_edit_shelf(shelf, page_title=_(u"Create a Shelf"), page="shelfcreate")


 @shelf.route("/shelf/edit/<int:shelf_id>", methods=["GET", "POST"])
 @login_required
 def edit_shelf(shelf_id):
     shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
-    return create_edit_shelf(shelf, title=_(u"Edit a shelf"), page="shelfedit", shelf_id=shelf_id)
+    if not check_shelf_edit_permissions(shelf):
+        flash(_(u"Sorry you are not allowed to edit this shelf"), category="error")
+        return redirect(url_for('web.index'))
+    return create_edit_shelf(shelf, page_title=_(u"Edit a shelf"), page="shelfedit", shelf_id=shelf_id)


 # if shelf ID is set, we are editing a shelf
-def create_edit_shelf(shelf, title, page, shelf_id=False):
+def create_edit_shelf(shelf, page_title, page, shelf_id=False):
+    sync_only_selected_shelves = current_user.kobo_only_shelves_sync
+    # calibre_db.session.query(ub.Shelf).filter(ub.Shelf.user_id == current_user.id).filter(ub.Shelf.kobo_sync).count()
     if request.method == "POST":
         to_save = request.form.to_dict()
-        if "is_public" in to_save:
-            shelf.is_public = 1
-        else:
-            shelf.is_public = 0
-        if check_shelf_is_unique(shelf, to_save, shelf_id):
-            shelf.name = to_save["title"]
-            # shelf.last_modified = datetime.utcnow()
+        shelf.is_public = 1 if to_save.get("is_public") else 0
+        if config.config_kobo_sync:
+            shelf.kobo_sync = True if to_save.get("kobo_sync") else False
+        shelf_title = to_save.get("title", "")
+        if check_shelf_is_unique(shelf, shelf_title, shelf_id):
+            shelf.name = shelf_title
             if not shelf_id:
                 shelf.user_id = int(current_user.id)
                 ub.session.add(shelf)
                 shelf_action = "created"
-                flash_text = _(u"Shelf %(title)s created", title=to_save["title"])
+                flash_text = _(u"Shelf %(title)s created", title=shelf_title)
             else:
                 shelf_action = "changed"
-                flash_text = _(u"Shelf %(title)s changed", title=to_save["title"])
+                flash_text = _(u"Shelf %(title)s changed", title=shelf_title)
             try:
                 ub.session.commit()
-                log.info(u"Shelf {} {}".format(to_save["title"], shelf_action))
+                log.info(u"Shelf {} {}".format(shelf_title, shelf_action))
                 flash(flash_text, category="success")
                 return redirect(url_for('shelf.show_shelf', shelf_id=shelf.id))
-            except (OperationalError, InvalidRequestError) as e:
+            except (OperationalError, InvalidRequestError) as ex:
                 ub.session.rollback()
-                log.debug_or_exception(e)
-                flash(_(u"Settings DB is not Writeable"), category="error")
-            except Exception as e:
+                log.debug_or_exception(ex)
+                log.error("Settings DB is not Writeable")
+                flash(_("Settings DB is not Writeable"), category="error")
+            except Exception as ex:
                 ub.session.rollback()
-                log.debug_or_exception(e)
+                log.debug_or_exception(ex)
                 flash(_(u"There was an error"), category="error")
-    return render_title_template('shelf_edit.html', shelf=shelf, title=title, page=page)
+    return render_title_template('shelf_edit.html',
+                                 shelf=shelf,
+                                 title=page_title,
+                                 page=page,
+                                 kobo_sync_enabled=config.config_kobo_sync,
+                                 sync_only_selected_shelves=sync_only_selected_shelves)


-def check_shelf_is_unique(shelf, to_save, shelf_id=False):
+def check_shelf_is_unique(shelf, title, shelf_id=False):
     if shelf_id:
         ident = ub.Shelf.id != shelf_id
     else:
         ident = true()
     if shelf.is_public == 1:
         is_shelf_name_unique = ub.session.query(ub.Shelf) \
-            .filter((ub.Shelf.name == to_save["title"]) & (ub.Shelf.is_public == 1)) \
+            .filter((ub.Shelf.name == title) & (ub.Shelf.is_public == 1)) \
             .filter(ident) \
             .first() is None

         if not is_shelf_name_unique:
-            flash(_(u"A public shelf with the name '%(title)s' already exists.", title=to_save["title"]),
+            log.error("A public shelf with the name '{}' already exists.".format(title))
+            flash(_(u"A public shelf with the name '%(title)s' already exists.", title=title),
                   category="error")
     else:
         is_shelf_name_unique = ub.session.query(ub.Shelf) \
-            .filter((ub.Shelf.name == to_save["title"]) & (ub.Shelf.is_public == 0) &
+            .filter((ub.Shelf.name == title) & (ub.Shelf.is_public == 0) &
                     (ub.Shelf.user_id == int(current_user.id))) \
             .filter(ident) \
             .first() is None

         if not is_shelf_name_unique:
-            flash(_(u"A private shelf with the name '%(title)s' already exists.", title=to_save["title"]),
+            log.error("A private shelf with the name '{}' already exists.".format(title))
+            flash(_(u"A private shelf with the name '%(title)s' already exists.", title=title),
                   category="error")
     return is_shelf_name_unique

@@ -311,7 +329,8 @@ def delete_shelf(shelf_id):
         delete_shelf_helper(cur_shelf)
     except InvalidRequestError:
         ub.session.rollback()
-        flash(_(u"Settings DB is not Writeable"), category="error")
+        log.error("Settings DB is not Writeable")
+        flash(_("Settings DB is not Writeable"), category="error")
     return redirect(url_for('web.index'))

@@ -345,13 +364,14 @@ def order_shelf(shelf_id):
             ub.session.commit()
         except (OperationalError, InvalidRequestError):
             ub.session.rollback()
-            flash(_(u"Settings DB is not Writeable"), category="error")
+            log.error("Settings DB is not Writeable")
+            flash(_("Settings DB is not Writeable"), category="error")

     shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
     result = list()
     if shelf and check_shelf_view_permissions(shelf):
-        result = calibre_db.session.query(db.Books)\
-            .join(ub.BookShelf,ub.BookShelf.book_id == db.Books.id , isouter=True) \
+        result = calibre_db.session.query(db.Books) \
+            .join(ub.BookShelf, ub.BookShelf.book_id == db.Books.id, isouter=True) \
             .add_columns(calibre_db.common_filters().label("visible")) \
             .filter(ub.BookShelf.shelf == shelf_id).order_by(ub.BookShelf.order.asc()).all()
     return render_title_template('shelf_order.html', entries=result,

@@ -360,7 +380,9 @@ def order_shelf(shelf_id):

 def change_shelf_order(shelf_id, order):
-    result = calibre_db.session.query(db.Books).join(ub.BookShelf,ub.BookShelf.book_id == db.Books.id)\
+    result = calibre_db.session.query(db.Books).outerjoin(db.books_series_link,
+                                                          db.Books.id == db.books_series_link.c.book)\
+        .outerjoin(db.Series).join(ub.BookShelf, ub.BookShelf.book_id == db.Books.id) \
         .filter(ub.BookShelf.shelf == shelf_id).order_by(*order).all()
     for index, entry in enumerate(result):
         book = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id) \

@@ -390,9 +412,11 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
         if sort_param == 'old':
             change_shelf_order(shelf_id, [db.Books.timestamp])
         if sort_param == 'authaz':
-            change_shelf_order(shelf_id, [db.Books.author_sort.asc()])
+            change_shelf_order(shelf_id, [db.Books.author_sort.asc(), db.Series.name, db.Books.series_index])
         if sort_param == 'authza':
-            change_shelf_order(shelf_id, [db.Books.author_sort.desc()])
+            change_shelf_order(shelf_id, [db.Books.author_sort.desc(),
+                                          db.Series.name.desc(),
+                                          db.Books.series_index.desc()])
         page = "shelf.html"
         pagesize = 0
     else:

@@ -400,13 +424,13 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
         page = 'shelfdown.html'

     result, __, pagination = calibre_db.fill_indexpage(page_no, pagesize,
-                                                        db.Books,
-                                                        ub.BookShelf.shelf == shelf_id,
-                                                        [ub.BookShelf.order.asc()],
-                                                        ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
+                                                       db.Books,
+                                                       ub.BookShelf.shelf == shelf_id,
+                                                       [ub.BookShelf.order.asc()],
+                                                       ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
     # delete chelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web
-    wrong_entries = calibre_db.session.query(ub.BookShelf)\
-        .join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True)\
+    wrong_entries = calibre_db.session.query(ub.BookShelf) \
+        .join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True) \
         .filter(db.Books.id == None).all()
     for entry in wrong_entries:
         log.info('Not existing book {} in {} deleted'.format(entry.book_id, shelf))

@@ -415,7 +439,8 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
         ub.session.commit()
     except (OperationalError, InvalidRequestError):
         ub.session.rollback()
-        flash(_(u"Settings DB is not Writeable"), category="error")
+        log.error("Settings DB is not Writeable")
+        flash(_("Settings DB is not Writeable"), category="error")

     return render_title_template(page,
                                  entries=result,
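The `create_edit_shelf()` changes above collapse the if/else checkbox handling into conditional expressions and add the Kobo-sync flag. The parsing logic is easy to isolate; a sketch under the assumption of a plain dict standing in for Flask's `request.form.to_dict()` (the helper name is illustrative):

```python
def parse_shelf_form(to_save, kobo_sync_enabled):
    # mirrors the simplified checkbox handling in create_edit_shelf():
    # an HTML checkbox key is present in the POST data only when ticked
    shelf = {}
    shelf['is_public'] = 1 if to_save.get("is_public") else 0
    if kobo_sync_enabled:
        shelf['kobo_sync'] = True if to_save.get("kobo_sync") else False
    shelf['name'] = to_save.get("title", "")
    return shelf
```

Using `.get()` throughout keeps a partially filled form from raising `KeyError`, which is also why the commit replaces the bare `to_save["title"]` lookups with `to_save.get("title", "")`.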
New binary files under cps/static/cmaps/ (content not shown):
78-EUC-H.bcmap, 78-EUC-V.bcmap, 78-H.bcmap, 78-RKSJ-H.bcmap, 78-RKSJ-V.bcmap, 78-V.bcmap,
78ms-RKSJ-H.bcmap, 78ms-RKSJ-V.bcmap, 83pv-RKSJ-H.bcmap, 90ms-RKSJ-H.bcmap, 90ms-RKSJ-V.bcmap,
90msp-RKSJ-H.bcmap, 90msp-RKSJ-V.bcmap, 90pv-RKSJ-H.bcmap, 90pv-RKSJ-V.bcmap,
Add-H.bcmap, Add-RKSJ-H.bcmap, Add-RKSJ-V.bcmap, Add-V.bcmap,
Adobe-CNS1-0.bcmap, Adobe-CNS1-1.bcmap, Adobe-CNS1-2.bcmap, Adobe-CNS1-3.bcmap,
Adobe-CNS1-4.bcmap, Adobe-CNS1-5.bcmap, Adobe-CNS1-6.bcmap, Adobe-CNS1-UCS2.bcmap,
Adobe-GB1-0.bcmap, Adobe-GB1-1.bcmap, Adobe-GB1-2.bcmap, Adobe-GB1-3.bcmap,
Adobe-GB1-4.bcmap, Adobe-GB1-5.bcmap, Adobe-GB1-UCS2.bcmap,
Adobe-Japan1-0.bcmap, Adobe-Japan1-1.bcmap, Adobe-Japan1-2.bcmap, Adobe-Japan1-3.bcmap,
Adobe-Japan1-4.bcmap, Adobe-Japan1-5.bcmap, Adobe-Japan1-6.bcmap, Adobe-Japan1-UCS2.bcmap,
Adobe-Korea1-0.bcmap, Adobe-Korea1-1.bcmap, Adobe-Korea1-2.bcmap, Adobe-Korea1-UCS2.bcmap,
B5-H.bcmap, B5-V.bcmap, B5pc-H.bcmap, B5pc-V.bcmap, CNS-EUC-H.bcmap, CNS-EUC-V.bcmap
Normal file
Binary file not shown.
BIN
cps/static/cmaps/CNS1-H.bcmap
Normal file
BIN
cps/static/cmaps/CNS1-H.bcmap
Normal file
Binary file not shown.
BIN
cps/static/cmaps/CNS1-V.bcmap
Normal file
BIN
cps/static/cmaps/CNS1-V.bcmap
Normal file
Binary file not shown.
BIN
cps/static/cmaps/CNS2-H.bcmap
Normal file
BIN
cps/static/cmaps/CNS2-H.bcmap
Normal file
Binary file not shown.
3  cps/static/cmaps/CNS2-V.bcmap  Normal file
@@ -0,0 +1,3 @@
+Copyright 1990-2009 Adobe Systems Incorporated.
+All rights reserved.
+See ./LICENSE
BIN  cps/static/cmaps/ETHK-B5-H.bcmap  Normal file (binary file not shown)
BIN  cps/static/cmaps/ETHK-B5-V.bcmap  Normal file (binary file not shown)
Some files were not shown because too many files have changed in this diff.