diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index 34d2c453..bfb4688d 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -6,6 +6,7 @@ labels: '' assignees: '' --- + **Describe the bug/problem** A clear and concise description of what the bug is. If you are asking for support, please check our [Wiki](https://github.com/janeczku/calibre-web/wiki) if your question is already answered there. @@ -30,7 +31,7 @@ If applicable, add screenshots to help explain your problem. - OS: [e.g. Windows 10/Raspberry Pi OS] - Python version: [e.g. python2.7] - Calibre-Web version: [e.g. 0.6.8 or 087c4c59 (git rev-parse --short HEAD)]: - - Docker container: [None/Technosoft2000/Linuxuser]: + - Docker container: [None/Technosoft2000/LinuxServer]: - Special Hardware: [e.g. Rasperry Pi Zero] - Browser: [e.g. Chrome 83.0.4103.97, Safari 13.3.7, Firefox 68.0.1 ESR] diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md index 1a71b1ac..6b2b9afc 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -7,6 +7,8 @@ assignees: '' --- + + **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 
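The "Environment" fields requested by the bug template above can usually be gathered with a couple of standard commands; a minimal sketch, assuming a Linux shell, a `python3` interpreter, and a git checkout of Calibre-Web (these commands are standard tooling, not part of Calibre-Web itself):

```shell
# Print the interpreter version for the "Python version" field
python3 --version

# Print the short commit hash for the "Calibre-Web version" field
# (falls back to a placeholder when run outside a git checkout)
git rev-parse --short HEAD 2>/dev/null || echo "not a git checkout"
```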
diff --git a/.gitignore b/.gitignore index f06dcd44..109de4ef 100644 --- a/.gitignore +++ b/.gitignore @@ -10,6 +10,7 @@ env/ venv/ eggs/ dist/ +executable/ build/ vendor/ .eggs/ diff --git a/README.md b/README.md index 82c7976c..3f0792bf 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@ Calibre-Web is a web app providing a clean interface for browsing, reading and d - full graphical setup - User management with fine-grained per-user permissions - Admin interface -- User Interface in czech, dutch, english, finnish, french, german, hungarian, italian, japanese, khmer, polish, russian, simplified chinese, spanish, swedish, turkish, ukrainian +- User Interface in czech, dutch, english, finnish, french, german, greek, hungarian, italian, japanese, khmer, polish, russian, simplified chinese, spanish, swedish, turkish, ukrainian - OPDS feed for eBook reader apps - Filter and search by titles, authors, tags, series and language - Create a custom book collection (shelves) @@ -32,8 +32,8 @@ Calibre-Web is a web app providing a clean interface for browsing, reading and d ## Quick start -1. Install dependencies by running `pip3 install --target vendor -r requirements.txt` (python3.x) or `pip install --target vendor -r requirements.txt` (python2.7). -2. Execute the command: `python cps.py` (or `nohup python cps.py` - recommended if you want to exit the terminal window) +1. Install dependencies by running `pip3 install --target vendor -r requirements.txt` (python3.x). Alternatively, set up a Python virtual environment. +2. Execute the command: `python3 cps.py` (or `nohup python3 cps.py` - recommended if you want to exit the terminal window) 3. Point your browser to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog 4.
Set `Location of Calibre database` to the path of the folder where your Calibre library (metadata.db) lives, push "submit" button\ Optionally a Google Drive can be used to host the calibre library [-> Using Google Drive integration](https://github.com/janeczku/calibre-web/wiki/Configuration#using-google-drive-integration) @@ -48,27 +48,27 @@ Please note that running the above install command can fail on some versions of ## Requirements -python 3.x+, (Python 2.7+) +python 3.x+ Optionally, to enable on-the-fly conversion from one ebook format to another when using the send-to-kindle feature, or during editing of ebooks metadata: [Download and install](https://calibre-ebook.com/download) the Calibre desktop program for your platform and enter the folder including program name (normally /opt/calibre/ebook-convert, or C:\Program Files\calibre\ebook-convert.exe) in the field "calibre's converter tool" on the setup page. -[Download](https://github.com/geek1011/kepubify/releases/tag/v3.1.2) Kepubify tool for your platform and place the binary starting with `kepubify` in Linux: `\opt\kepubify` Windows: `C:\Program Files\kepubify`. +[Download](https://github.com/pgaskin/kepubify/releases/latest) the Kepubify tool for your platform and place the binary starting with `kepubify` in Linux: `/opt/kepubify` Windows: `C:\Program Files\kepubify`. ## Docker Images Pre-built Docker images are available in these Docker Hub repositories: #### **Technosoft2000 - x64** -+ Docker Hub - [https://hub.docker.com/r/technosoft2000/calibre-web/](https://hub.docker.com/r/technosoft2000/calibre-web/) ++ Docker Hub - [https://hub.docker.com/r/technosoft2000/calibre-web](https://hub.docker.com/r/technosoft2000/calibre-web) + Github - [https://github.com/Technosoft2000/docker-calibre-web](https://github.com/Technosoft2000/docker-calibre-web) Includes the Calibre `ebook-convert` binary.
+ The "path to convertertool" should be set to `/opt/calibre/ebook-convert` #### **LinuxServer - x64, armhf, aarch64** -+ Docker Hub - [https://hub.docker.com/r/linuxserver/calibre-web/](https://hub.docker.com/r/linuxserver/calibre-web/) ++ Docker Hub - [https://hub.docker.com/r/linuxserver/calibre-web](https://hub.docker.com/r/linuxserver/calibre-web) + Github - [https://github.com/linuxserver/docker-calibre-web](https://github.com/linuxserver/docker-calibre-web) + Github - (Optional Calibre layer) - [https://github.com/linuxserver/docker-calibre-web/tree/calibre](https://github.com/linuxserver/docker-calibre-web/tree/calibre) @@ -83,3 +83,7 @@ Pre-built Docker images are available in these Docker Hub repositories: # Wiki For further information, How To's and FAQ please check the [Wiki](https://github.com/janeczku/calibre-web/wiki) + +# Contributing to Calibre-Web + +Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) diff --git a/cps.py b/cps.py index 3601e4b6..737b0d97 100755 --- a/cps.py +++ b/cps.py @@ -41,6 +41,8 @@ from cps.shelf import shelf from cps.admin import admi from cps.gdrive import gdrive from cps.editbooks import editbook +from cps.remotelogin import remotelogin +from cps.error_handler import init_errorhandler try: from cps.kobo import kobo, get_kobo_activated @@ -58,12 +60,17 @@ except ImportError: def main(): app = create_app() + + init_errorhandler() + app.register_blueprint(web) app.register_blueprint(opds) app.register_blueprint(jinjia) app.register_blueprint(about) app.register_blueprint(shelf) app.register_blueprint(admi) + app.register_blueprint(remotelogin) + # if config.config_use_google_drive: app.register_blueprint(gdrive) app.register_blueprint(editbook) if kobo_available: diff --git a/cps/__init__.py b/cps/__init__.py index d557649c..627cca0b 100644 --- a/cps/__init__.py +++ b/cps/__init__.py @@ -45,6 +45,7 @@ mimetypes.add_type('application/fb2+zip', '.fb2') 
mimetypes.add_type('application/x-mobipocket-ebook', '.mobi') mimetypes.add_type('application/x-mobipocket-ebook', '.prc') mimetypes.add_type('application/vnd.amazon.ebook', '.azw') +mimetypes.add_type('application/x-mobi8-ebook', '.azw3') mimetypes.add_type('application/x-cbr', '.cbr') mimetypes.add_type('application/x-cbz', '.cbz') mimetypes.add_type('application/x-cbt', '.cbt') @@ -73,7 +74,6 @@ ub.init_db(cli.settingspath) # pylint: disable=no-member config = config_sql.load_configuration(ub.session) -searched_ids = {} web_server = WebServer() babel = Babel() @@ -83,6 +83,8 @@ log = logger.create() from . import services +db.CalibreDB.setup_db(config, cli.settingspath) + calibre_db = db.CalibreDB() def create_app(): @@ -91,18 +93,20 @@ if sys.version_info < (3, 0): app.static_folder = app.static_folder.decode('utf-8') app.root_path = app.root_path.decode('utf-8') - app.instance_path = app.instance_path .decode('utf-8') + app.instance_path = app.instance_path.decode('utf-8') - cache_buster.init_cache_busting(app) + if os.environ.get('FLASK_DEBUG'): + cache_buster.init_cache_busting(app) log.info('Starting Calibre Web...') + if sys.version_info < (3, 0): + log.info('Python2 is EOL since end of 2019; this version of Calibre-Web still supports Python2, but please consider upgrading to Python3') + print('Python2 is EOL since end of 2019; this version of Calibre-Web still supports Python2, but please consider upgrading to Python3') Principal(app) lm.init_app(app) app.secret_key = os.getenv('SECRET_KEY', config_sql.get_flask_session_key(ub.session)) web_server.init_app(app, config) - calibre_db.setup_db(config, cli.settingspath) - calibre_db.start() babel.init_app(app) _BABEL_TRANSLATIONS.update(str(item) for item in babel.list_translations()) diff --git a/cps/about.py b/cps/about.py index 676d91db..2e9cc699 100644 --- a/cps/about.py +++ b/cps/about.py @@ -31,7 +31,7 @@ import werkzeug, flask, flask_login, flask_principal, jinja2 from flask_babel import gettext as _
from . import db, calibre_db, converter, uploader, server, isoLanguages, constants -from .web import render_title_template +from .render_template import render_title_template try: from flask_login import __version__ as flask_loginVersion except ImportError: @@ -48,6 +48,11 @@ try: except ImportError: flask_danceVersion = None +try: + from greenlet import __version__ as greenlet_Version +except ImportError: + greenlet_Version = None + from . import services about = flask.Blueprint('about', __name__) @@ -77,11 +82,18 @@ _VERSIONS = OrderedDict( python_LDAP = services.ldapVersion if bool(services.ldapVersion) else None, Goodreads = u'installed' if bool(services.goodreads_support) else None, jsonschema = services.SyncToken.__version__ if bool(services.SyncToken) else None, - flask_dance = flask_danceVersion + flask_dance = flask_danceVersion, + greenlet = greenlet_Version ) _VERSIONS.update(uploader.get_versions()) +def collect_stats(): + _VERSIONS['ebook converter'] = _(converter.get_calibre_version()) + _VERSIONS['unrar'] = _(converter.get_unrar_version()) + _VERSIONS['kepubify'] = _(converter.get_kepubify_version()) + return _VERSIONS + @about.route("/stats") @flask_login.login_required def stats(): @@ -89,8 +101,7 @@ def stats(): authors = calibre_db.session.query(db.Authors).count() categorys = calibre_db.session.query(db.Tags).count() series = calibre_db.session.query(db.Series).count() - _VERSIONS['ebook converter'] = _(converter.get_calibre_version()) - _VERSIONS['unrar'] = _(converter.get_unrar_version()) - _VERSIONS['kepubify'] = _(converter.get_kepubify_version()) - return render_title_template('stats.html', bookcounter=counter, authorcounter=authors, versions=_VERSIONS, + return render_title_template('stats.html', bookcounter=counter, authorcounter=authors, versions=collect_stats(), categorycounter=categorys, seriecounter=series, title=_(u"Statistics"), page="stat") + + diff --git a/cps/admin.py b/cps/admin.py index 424a12b4..b932bd8e 100644 --- 
a/cps/admin.py +++ b/cps/admin.py @@ -5,7 +5,7 @@ # andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh, # falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe, # ruben-herold, marblepebble, JackED42, SiphonSquirrel, -# apetresc, nanu-c, mutschler +# apetresc, nanu-c, mutschler, GammaC0de, vuolter # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by @@ -26,35 +26,45 @@ import re import base64 import json import time +import operator from datetime import datetime, timedelta from babel import Locale as LC from babel.dates import format_datetime -from flask import Blueprint, flash, redirect, url_for, abort, request, make_response, send_from_directory -from flask_login import login_required, current_user, logout_user +from flask import Blueprint, flash, redirect, url_for, abort, request, make_response, send_from_directory, g +from flask_login import login_required, current_user, logout_user, confirm_login from flask_babel import gettext as _ from sqlalchemy import and_ from sqlalchemy.exc import IntegrityError, OperationalError, InvalidRequestError -from sqlalchemy.sql.expression import func +from sqlalchemy.sql.expression import func, or_ from . import constants, logger, helper, services +from .cli import filepicker from . import db, calibre_db, ub, web_server, get_locale, config, updater_thread, babel, gdriveutils from .helper import check_valid_domain, send_test_mail, reset_password, generate_password_hash from .gdriveutils import is_gdrive_ready, gdrive_support -from .web import admin_required, render_title_template, before_request, unconfigured, login_required_if_no_ano +from .render_template import render_title_template, get_sidebar_config +from . 
import debug_info + +try: + from functools import wraps +except ImportError: + pass # We're not using Python 3 log = logger.create() feature_support = { 'ldap': bool(services.ldap), 'goodreads': bool(services.goodreads_support), - 'kobo': bool(services.kobo) + 'kobo': bool(services.kobo), + 'updater': constants.UPDATER_AVAILABLE } try: + # pylint: disable=unused-import import rarfile feature_support['rar'] = True -except ImportError: +except (ImportError, SyntaxError): feature_support['rar'] = False try: @@ -71,6 +81,52 @@ feature_support['gdrive'] = gdrive_support admi = Blueprint('admin', __name__) +def admin_required(f): + """ + Checks if current_user.role == 1 + """ + + @wraps(f) + def inner(*args, **kwargs): + if current_user.role_admin(): + return f(*args, **kwargs) + abort(403) + + return inner + + +def unconfigured(f): + """ + Checks if calibre-web instance is not configured + """ + @wraps(f) + def inner(*args, **kwargs): + if not config.db_configured: + return f(*args, **kwargs) + abort(403) + + return inner + + +@admi.before_app_request +def before_request(): + if current_user.is_authenticated: + confirm_login() + g.constants = constants + g.user = current_user + g.allow_registration = config.config_public_reg + g.allow_anonymous = config.config_anonbrowse + g.allow_upload = config.config_uploading + g.current_theme = config.config_theme + g.config_authors_max = config.config_authors_max + g.shelves_access = ub.session.query(ub.Shelf).filter( + or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all() + if '/static/' not in request.path and not config.db_configured and \ + request.endpoint not in ('admin.basic_configuration', + 'login', + 'admin.config_pathchooser'): + return redirect(url_for('admin.basic_configuration')) + @admi.route("/admin") @login_required @@ -129,10 +185,11 @@ def admin(): else: commit = version['version'] - allUser = ub.session.query(ub.User).all() + all_user = ub.session.query(ub.User).all() 
email_settings = config.get_mail_settings() - return render_title_template("admin.html", allUser=allUser, email=email_settings, config=config, commit=commit, - feature_support=feature_support, + kobo_support = feature_support['kobo'] and config.config_kobo_sync + return render_title_template("admin.html", allUser=all_user, email=email_settings, config=config, commit=commit, + feature_support=feature_support, kobo_support=kobo_support, title=_(u"Admin page"), page="admin") @@ -141,7 +198,7 @@ def admin(): @admin_required def configuration(): if request.method == "POST": - return _configuration_update_helper() + return _configuration_update_helper(True) return _configuration_result() @@ -149,12 +206,12 @@ def configuration(): @login_required @admin_required def view_configuration(): - readColumn = calibre_db.session.query(db.Custom_Columns)\ - .filter(and_(db.Custom_Columns.datatype == 'bool', db.Custom_Columns.mark_for_delete == 0)).all() - restrictColumns= calibre_db.session.query(db.Custom_Columns)\ - .filter(and_(db.Custom_Columns.datatype == 'text', db.Custom_Columns.mark_for_delete == 0)).all() - return render_title_template("config_view_edit.html", conf=config, readColumns=readColumn, - restrictColumns=restrictColumns, + read_column = calibre_db.session.query(db.Custom_Columns)\ + .filter(and_(db.Custom_Columns.datatype == 'bool', db.Custom_Columns.mark_for_delete == 0)).all() + restrict_columns = calibre_db.session.query(db.Custom_Columns)\ + .filter(and_(db.Custom_Columns.datatype == 'text', db.Custom_Columns.mark_for_delete == 0)).all() + return render_title_template("config_view_edit.html", conf=config, readColumns=read_column, + restrictColumns=restrict_columns, title=_(u"UI Configuration"), page="uiconfig") @@ -162,7 +219,6 @@ def view_configuration(): @login_required @admin_required def update_view_configuration(): - reboot_required = False to_save = request.form.to_dict() _config_string = lambda x: config.set_from_dictionary(to_save, x, lambda y: 
y.strip() if y else y) @@ -170,7 +226,8 @@ _config_string("config_calibre_web_title") _config_string("config_columns_to_ignore") - reboot_required |= _config_string("config_title_regex") + if _config_string("config_title_regex"): + calibre_db.update_title_sort(config) _config_int("config_read_column") _config_int("config_theme") @@ -189,14 +246,25 @@ config.save() flash(_(u"Calibre-Web configuration updated"), category="success") before_request() - if reboot_required: - db.dispose() - ub.dispose() - web_server.stop(True) return view_configuration() +@admi.route("/ajax/loaddialogtexts/<element_id>") +@login_required +def load_dialogtexts(element_id): + texts = {"header": "", "main": ""} + if element_id == "config_delete_kobo_token": + texts["main"] = _('Do you really want to delete the Kobo Token?') + elif element_id == "btndeletedomain": + texts["main"] = _('Do you really want to delete this domain?') + elif element_id == "btndeluser": + texts["main"] = _('Do you really want to delete this user?') + elif element_id == "delete_shelf": + texts["main"] = _('Are you sure you want to delete this shelf?') + return json.dumps(texts) + + @admi.route("/ajax/editdomain/<int:allow>", methods=['POST']) @login_required @admin_required @@ -208,8 +276,7 @@ def edit_domain(allow): vals = request.form.to_dict() answer = ub.session.query(ub.Registration).filter(ub.Registration.id == vals['pk']).first() answer.domain = vals['value'].replace('*', '%').replace('?', '_').lower() - ub.session.commit() - return "" + return ub.session_commit("Registering Domains edited {}".format(answer.domain)) @admi.route("/ajax/adddomain/<int:allow>", methods=['POST']) @@ -217,11 +284,12 @@ @admin_required def add_domain(allow): domain_name = request.form.to_dict()['domainname'].replace('*', '%').replace('?', '_').lower() - check = ub.session.query(ub.Registration).filter(ub.Registration.domain == domain_name).filter(ub.Registration.allow ==
allow).first() + check = ub.session.query(ub.Registration).filter(ub.Registration.domain == domain_name)\ + .filter(ub.Registration.allow == allow).first() if not check: new_domain = ub.Registration(domain=domain_name, allow=allow) ub.session.add(new_domain) - ub.session.commit() + ub.session_commit("Registering Domains added {}".format(domain_name)) return "" @@ -229,14 +297,17 @@ @login_required @admin_required def delete_domain(): - domain_id = request.form.to_dict()['domainid'].replace('*', '%').replace('?', '_').lower() - ub.session.query(ub.Registration).filter(ub.Registration.id == domain_id).delete() - ub.session.commit() - # If last domain was deleted, add all domains by default - if not ub.session.query(ub.Registration).filter(ub.Registration.allow==1).count(): - new_domain = ub.Registration(domain="%.%",allow=1) - ub.session.add(new_domain) - ub.session.commit() + try: + domain_id = request.form.to_dict()['domainid'].replace('*', '%').replace('?', '_').lower() + ub.session.query(ub.Registration).filter(ub.Registration.id == domain_id).delete() + ub.session_commit("Registering Domains deleted {}".format(domain_id)) + # If last domain was deleted, add all domains by default + if not ub.session.query(ub.Registration).filter(ub.Registration.allow == 1).count(): + new_domain = ub.Registration(domain="%.%", allow=1) + ub.session.add(new_domain) + ub.session_commit("Last Registering Domain deleted, added *.* as default") + except KeyError: + pass return "" @@ -251,75 +322,74 @@ def list_domain(allow): response.headers["Content-Type"] = "application/json; charset=utf-8" return response -@admi.route("/ajax/editrestriction/<int:res_type>", methods=['POST']) + +@admi.route("/ajax/editrestriction/<int:res_type>", defaults={"user_id": 0}, methods=['POST']) +@admi.route("/ajax/editrestriction/<int:res_type>/<int:user_id>", methods=['POST']) @login_required @admin_required -def edit_restriction(res_type): +def edit_restriction(res_type, user_id): element = request.form.to_dict() if
element['id'].startswith('a'): if res_type == 0: # Tags as template elementlist = config.list_allowed_tags() - elementlist[int(element['id'][1:])]=element['Element'] + elementlist[int(element['id'][1:])] = element['Element'] config.config_allowed_tags = ','.join(elementlist) config.save() if res_type == 1: # CustomC elementlist = config.list_allowed_column_values() - elementlist[int(element['id'][1:])]=element['Element'] + elementlist[int(element['id'][1:])] = element['Element'] config.config_allowed_column_value = ','.join(elementlist) config.save() if res_type == 2: # Tags per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: - usr = ub.session.query(ub.User).filter(ub.User.id == int(usr_id)).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() else: usr = current_user elementlist = usr.list_allowed_tags() - elementlist[int(element['id'][1:])]=element['Element'] + elementlist[int(element['id'][1:])] = element['Element'] usr.allowed_tags = ','.join(elementlist) - ub.session.commit() + ub.session_commit("Changed allowed tags of user {} to {}".format(usr.nickname, usr.allowed_tags)) if res_type == 3: # CColumn per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: - usr = ub.session.query(ub.User).filter(ub.User.id == int(usr_id)).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() else: usr = current_user elementlist = usr.list_allowed_column_values() - elementlist[int(element['id'][1:])]=element['Element'] + elementlist[int(element['id'][1:])] = element['Element'] usr.allowed_column_value = ','.join(elementlist) - ub.session.commit() + ub.session_commit("Changed allowed columns of user {} to {}".format(usr.nickname, usr.allowed_column_value)) if element['id'].startswith('d'): if res_type == 0: # Tags as template elementlist = config.list_denied_tags() - 
elementlist[int(element['id'][1:])]=element['Element'] + elementlist[int(element['id'][1:])] = element['Element'] config.config_denied_tags = ','.join(elementlist) config.save() if res_type == 1: # CustomC elementlist = config.list_denied_column_values() - elementlist[int(element['id'][1:])]=element['Element'] + elementlist[int(element['id'][1:])] = element['Element'] config.config_denied_column_value = ','.join(elementlist) config.save() if res_type == 2: # Tags per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: - usr = ub.session.query(ub.User).filter(ub.User.id == int(usr_id)).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() else: usr = current_user elementlist = usr.list_denied_tags() - elementlist[int(element['id'][1:])]=element['Element'] + elementlist[int(element['id'][1:])] = element['Element'] usr.denied_tags = ','.join(elementlist) - ub.session.commit() + ub.session_commit("Changed denied tags of user {} to {}".format(usr.nickname, usr.denied_tags)) if res_type == 3: # CColumn per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: - usr = ub.session.query(ub.User).filter(ub.User.id == int(usr_id)).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() else: usr = current_user elementlist = usr.list_denied_column_values() - elementlist[int(element['id'][1:])]=element['Element'] + elementlist[int(element['id'][1:])] = element['Element'] usr.denied_column_value = ','.join(elementlist) - ub.session.commit() + ub.session_commit("Changed denied columns of user {} to {}".format(usr.nickname, usr.denied_column_value)) return "" + def restriction_addition(element, list_func): elementlist = list_func() if elementlist == ['']: @@ -336,10 +406,11 @@ def restriction_deletion(element, list_func): return ','.join(elementlist) -@admi.route("/ajax/addrestriction/<int:res_type>", methods=['POST']) +@admi.route("/ajax/addrestriction/<int:res_type>", defaults={"user_id": 0}, methods=['POST']) +@admi.route("/ajax/addrestriction/<int:res_type>/<int:user_id>", methods=['POST']) @login_required @admin_required -def add_restriction(res_type): +def add_restriction(res_type, user_id): element = request.form.to_dict() if res_type == 0: # Tags as template if 'submit_allow' in element: @@ -356,35 +427,37 @@ config.config_denied_column_value = restriction_addition(element, config.list_allowed_column_values) config.save() if res_type == 2: # Tags per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: - usr = ub.session.query(ub.User).filter(ub.User.id == int(usr_id)).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() else: usr = current_user if 'submit_allow' in element: usr.allowed_tags = restriction_addition(element, usr.list_allowed_tags) - ub.session.commit() + ub.session_commit("Changed allowed tags of user {} to {}".format(usr.nickname, usr.list_allowed_tags)) elif 'submit_deny' in element: usr.denied_tags = restriction_addition(element, usr.list_denied_tags) - ub.session.commit() + ub.session_commit("Changed denied tags of user {} to {}".format(usr.nickname, usr.list_denied_tags)) if res_type == 3: # CustomC per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: - usr = ub.session.query(ub.User).filter(ub.User.id == int(usr_id)).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() else: usr = current_user if 'submit_allow' in element: usr.allowed_column_value = restriction_addition(element, usr.list_allowed_column_values) - ub.session.commit() + ub.session_commit("Changed allowed columns of user {} to {}".format(usr.nickname, + usr.list_allowed_column_values)) elif 'submit_deny' in element: usr.denied_column_value = restriction_addition(element,
usr.list_denied_column_values) - ub.session.commit() + ub.session_commit("Changed denied columns of user {} to {}".format(usr.nickname, + usr.list_denied_column_values)) return "" -@admi.route("/ajax/deleterestriction/", methods=['POST']) + +@admi.route("/ajax/deleterestriction/", defaults={"user_id": 0}, methods=['POST']) +@admi.route("/ajax/deleterestriction//", methods=['POST']) @login_required @admin_required -def delete_restriction(res_type): +def delete_restriction(res_type, user_id): element = request.form.to_dict() if res_type == 0: # Tags as template if element['id'].startswith('a'): @@ -401,85 +474,182 @@ def delete_restriction(res_type): config.config_denied_column_value = restriction_deletion(element, config.list_denied_column_values) config.save() elif res_type == 2: # Tags per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: - usr = ub.session.query(ub.User).filter(ub.User.id == int(usr_id)).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() else: usr = current_user if element['id'].startswith('a'): usr.allowed_tags = restriction_deletion(element, usr.list_allowed_tags) - ub.session.commit() + ub.session_commit("Deleted allowed tags of user {}: {}".format(usr.nickname, usr.list_allowed_tags)) elif element['id'].startswith('d'): usr.denied_tags = restriction_deletion(element, usr.list_denied_tags) - ub.session.commit() + ub.session_commit("Deleted denied tags of user {}: {}".format(usr.nickname, usr.list_allowed_tags)) elif res_type == 3: # Columns per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: # select current user if admins are editing their own rights - usr = ub.session.query(ub.User).filter(ub.User.id == int(usr_id)).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() else: usr = current_user if element['id'].startswith('a'): 
usr.allowed_column_value = restriction_deletion(element, usr.list_allowed_column_values) - ub.session.commit() + ub.session_commit("Deleted allowed columns of user {}: {}".format(usr.nickname, + usr.list_allowed_column_values)) + elif element['id'].startswith('d'): usr.denied_column_value = restriction_deletion(element, usr.list_denied_column_values) - ub.session.commit() + ub.session_commit("Deleted denied columns of user {}: {}".format(usr.nickname, + usr.list_denied_column_values)) return "" -@admi.route("/ajax/listrestriction/<int:res_type>") +@admi.route("/ajax/listrestriction/<int:res_type>", defaults={"user_id": 0}) +@admi.route("/ajax/listrestriction/<int:res_type>/<int:user_id>") @login_required @admin_required -def list_restriction(res_type): +def list_restriction(res_type, user_id): if res_type == 0: # Tags as template restrict = [{'Element': x, 'type':_('Deny'), 'id': 'd'+str(i) } - for i,x in enumerate(config.list_denied_tags()) if x != '' ] - allow = [{'Element': x, 'type':_('Allow'), 'id': 'a'+str(i) } - for i,x in enumerate(config.list_allowed_tags()) if x != ''] + for i,x in enumerate(config.list_denied_tags()) if x != ''] + allow = [{'Element': x, 'type': _('Allow'), 'id': 'a'+str(i)} + for i, x in enumerate(config.list_allowed_tags()) if x != ''] json_dumps = restrict + allow elif res_type == 1: # CustomC as template - restrict = [{'Element': x, 'type':_('Deny'), 'id': 'd'+str(i) } - for i,x in enumerate(config.list_denied_column_values()) if x != '' ] - allow = [{'Element': x, 'type':_('Allow'), 'id': 'a'+str(i) } - for i,x in enumerate(config.list_allowed_column_values()) if x != ''] + restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd'+str(i)} + for i, x in enumerate(config.list_denied_column_values()) if x != ''] + allow = [{'Element': x, 'type': _('Allow'), 'id': 'a'+str(i)} + for i, x in enumerate(config.list_allowed_column_values()) if x != ''] json_dumps = restrict + allow elif res_type == 2: # Tags per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: -
usr = ub.session.query(ub.User).filter(ub.User.id == usr_id).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == user_id).first() else: usr = current_user - restrict = [{'Element': x, 'type':_('Deny'), 'id': 'd'+str(i) } - for i,x in enumerate(usr.list_denied_tags()) if x != '' ] - allow = [{'Element': x, 'type':_('Allow'), 'id': 'a'+str(i) } - for i,x in enumerate(usr.list_allowed_tags()) if x != ''] + restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd'+str(i)} + for i, x in enumerate(usr.list_denied_tags()) if x != ''] + allow = [{'Element': x, 'type': _('Allow'), 'id': 'a'+str(i)} + for i, x in enumerate(usr.list_allowed_tags()) if x != ''] json_dumps = restrict + allow elif res_type == 3: # CustomC per user - usr_id = os.path.split(request.referrer)[-1] - if usr_id.isdigit() == True: - usr = ub.session.query(ub.User).filter(ub.User.id==usr_id).first() + if isinstance(user_id, int): + usr = ub.session.query(ub.User).filter(ub.User.id == user_id).first() else: usr = current_user - restrict = [{'Element': x, 'type':_('Deny'), 'id': 'd'+str(i) } - for i,x in enumerate(usr.list_denied_column_values()) if x != '' ] - allow = [{'Element': x, 'type':_('Allow'), 'id': 'a'+str(i) } - for i,x in enumerate(usr.list_allowed_column_values()) if x != ''] + restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd'+str(i)} + for i, x in enumerate(usr.list_denied_column_values()) if x != ''] + allow = [{'Element': x, 'type': _('Allow'), 'id': 'a'+str(i)} + for i, x in enumerate(usr.list_allowed_column_values()) if x != ''] json_dumps = restrict + allow else: - json_dumps="" + json_dumps = "" js = json.dumps(json_dumps) response = make_response(js.replace("'", '"')) response.headers["Content-Type"] = "application/json; charset=utf-8" return response -@admi.route("/config", methods=["GET", "POST"]) +@admi.route("/basicconfig/pathchooser/") +@unconfigured +def config_pathchooser(): + if filepicker: + return pathchooser() + abort(403) + + 
+@admi.route("/ajax/pathchooser/") +@login_required +@admin_required +def ajax_pathchooser(): + return pathchooser() + + +def pathchooser(): + browse_for = "folder" + folder_only = request.args.get('folder', False) == "true" + file_filter = request.args.get('filter', "") + path = os.path.normpath(request.args.get('path', "")) + + if os.path.isfile(path): + oldfile = path + path = os.path.dirname(path) + else: + oldfile = "" + + absolute = False + + if os.path.isdir(path): + # if os.path.isabs(path): + cwd = os.path.realpath(path) + absolute = True + # else: + # cwd = os.path.relpath(path) + else: + cwd = os.getcwd() + + cwd = os.path.normpath(os.path.realpath(cwd)) + parentdir = os.path.dirname(cwd) + if not absolute: + if os.path.realpath(cwd) == os.path.realpath("/"): + cwd = os.path.relpath(cwd) + else: + cwd = os.path.relpath(cwd) + os.path.sep + parentdir = os.path.relpath(parentdir) + os.path.sep + + if os.path.realpath(cwd) == os.path.realpath("/"): + parentdir = "" + + try: + folders = os.listdir(cwd) + except Exception: + folders = [] + + files = [] + # locale = get_locale() + for f in folders: + try: + data = {"name": f, "fullpath": os.path.join(cwd, f)} + data["sort"] = data["fullpath"].lower() + except Exception: + continue + + if os.path.isfile(os.path.join(cwd, f)): + if folder_only: + continue + if file_filter != "" and file_filter != f: + continue + data["type"] = "file" + data["size"] = os.path.getsize(os.path.join(cwd, f)) + + power = 0 + while (data["size"] >> 10) > 0.3: + power += 1 + data["size"] >>= 10 + units = ("", "K", "M", "G", "T") + data["size"] = str(data["size"]) + " " + units[power] + "Byte" + else: + data["type"] = "dir" + data["size"] = "" + + files.append(data) + + files = sorted(files, key=operator.itemgetter("type", "sort")) + + context = { + "cwd": cwd, + "files": files, + "parentdir": parentdir, + "type": browse_for, + "oldfile": oldfile, + "absolute": absolute, + } + return json.dumps(context) + + +@admi.route("/basicconfig", 
methods=["GET", "POST"]) @unconfigured def basic_configuration(): logout_user() if request.method == "POST": - return _configuration_update_helper() - return _configuration_result() + return _configuration_update_helper(configured=filepicker) + return _configuration_result(configured=filepicker) def _config_int(to_save, x, func=int): @@ -503,8 +673,8 @@ def _configuration_gdrive_helper(to_save): config.config_use_google_drive = False gdrive_secrets = {} - gdriveError = gdriveutils.get_error_text(gdrive_secrets) - if "config_use_google_drive" in to_save and not config.config_use_google_drive and not gdriveError: + gdrive_error = gdriveutils.get_error_text(gdrive_secrets) + if "config_use_google_drive" in to_save and not config.config_use_google_drive and not gdrive_error: with open(gdriveutils.CLIENT_SECRETS, 'r') as settings: gdrive_secrets = json.load(settings)['web'] if not gdrive_secrets: @@ -516,10 +686,11 @@ def _configuration_gdrive_helper(to_save): ) # always show google drive settings, but in case of error deny support - config.config_use_google_drive = (not gdriveError) and ("config_use_google_drive" in to_save) + config.config_use_google_drive = (not gdrive_error) and ("config_use_google_drive" in to_save) if _config_string(to_save, "config_google_drive_folder"): gdriveutils.deleteDatabaseOnChange() - return gdriveError + return gdrive_error + def _configuration_oauth_helper(to_save): active_oauths = 0 @@ -542,20 +713,87 @@ def _configuration_oauth_helper(to_save): "active": element["active"]}) return reboot_required -def _configuration_logfile_helper(to_save, gdriveError): + +def _configuration_logfile_helper(to_save, gdrive_error): reboot_required = False reboot_required |= _config_int(to_save, "config_log_level") reboot_required |= _config_string(to_save, "config_logfile") if not logger.is_valid_logfile(config.config_logfile): - return reboot_required, _configuration_result(_('Logfile Location is not Valid, Please Enter Correct Path'), gdriveError) + 
return reboot_required, \ + _configuration_result(_('Logfile Location is not Valid, Please Enter Correct Path'), gdrive_error) reboot_required |= _config_checkbox_int(to_save, "config_access_log") reboot_required |= _config_string(to_save, "config_access_logfile") if not logger.is_valid_logfile(config.config_access_logfile): - return reboot_required, _configuration_result(_('Access Logfile Location is not Valid, Please Enter Correct Path'), gdriveError) + return reboot_required, \ + _configuration_result(_('Access Logfile Location is not Valid, Please Enter Correct Path'), gdrive_error) return reboot_required, None -def _configuration_ldap_helper(to_save, gdriveError): + +def _configuration_ldap_check(reboot_required, to_save, gdrive_error): + if not config.config_ldap_provider_url \ + or not config.config_ldap_port \ + or not config.config_ldap_dn \ + or not config.config_ldap_user_object: + return reboot_required, _configuration_result(_('Please Enter a LDAP Provider, ' + 'Port, DN and User Object Identifier'), gdrive_error) + if config.config_ldap_authentication > constants.LDAP_AUTH_ANONYMOUS: + if config.config_ldap_authentication > constants.LDAP_AUTH_UNAUTHENTICATE: + if not config.config_ldap_serv_username or not bool(config.config_ldap_serv_password): + return reboot_required, _configuration_result('Please Enter a LDAP Service Account and Password', + gdrive_error) + else: + if not config.config_ldap_serv_username: + return reboot_required, _configuration_result('Please Enter a LDAP Service Account', gdrive_error) + + if config.config_ldap_group_object_filter: + if config.config_ldap_group_object_filter.count("%s") != 1: + return reboot_required, \ + _configuration_result(_('LDAP Group Object Filter Needs to Have One "%s" Format Identifier'), + gdrive_error) + if config.config_ldap_group_object_filter.count("(") != config.config_ldap_group_object_filter.count(")"): + return reboot_required, _configuration_result(_('LDAP Group Object Filter Has Unmatched 
Parenthesis'), + gdrive_error) + + if config.config_ldap_user_object.count("%s") != 1: + return reboot_required, \ + _configuration_result(_('LDAP User Object Filter needs to Have One "%s" Format Identifier'), + gdrive_error) + if config.config_ldap_user_object.count("(") != config.config_ldap_user_object.count(")"): + return reboot_required, _configuration_result(_('LDAP User Object Filter Has Unmatched Parenthesis'), + gdrive_error) + + if to_save["ldap_import_user_filter"] == '0': + config.config_ldap_member_user_object = "" + else: + if config.config_ldap_member_user_object.count("%s") != 1: + return reboot_required, \ + _configuration_result(_('LDAP Member User Filter needs to Have One "%s" Format Identifier'), + gdrive_error) + if config.config_ldap_member_user_object.count("(") != config.config_ldap_member_user_object.count(")"): + return reboot_required, _configuration_result(_('LDAP Member User Filter Has Unmatched Parenthesis'), + gdrive_error) + + if config.config_ldap_cacert_path or config.config_ldap_cert_path or config.config_ldap_key_path: + if not (os.path.isfile(config.config_ldap_cacert_path) and + os.path.isfile(config.config_ldap_cert_path) and + os.path.isfile(config.config_ldap_key_path)): + return reboot_required, \ + _configuration_result(_('LDAP CACertificate, Certificate or Key Location is not Valid, ' + 'Please Enter Correct Path'), + gdrive_error) + return reboot_required, None + + +def _configuration_ldap_helper(to_save, gdrive_error): reboot_required = False
reboot_required |= _config_string(to_save, "config_ldap_provider_url") reboot_required |= _config_int(to_save, "config_ldap_port") @@ -565,84 +803,66 @@ def _configuration_ldap_helper(to_save, gdriveError): reboot_required |= _config_string(to_save, "config_ldap_user_object") reboot_required |= _config_string(to_save, "config_ldap_group_object_filter") reboot_required |= _config_string(to_save, "config_ldap_group_members_field") + reboot_required |= _config_string(to_save, "config_ldap_member_user_object") reboot_required |= _config_checkbox(to_save, "config_ldap_openldap") reboot_required |= _config_int(to_save, "config_ldap_encryption") + reboot_required |= _config_string(to_save, "config_ldap_cacert_path") reboot_required |= _config_string(to_save, "config_ldap_cert_path") + reboot_required |= _config_string(to_save, "config_ldap_key_path") _config_string(to_save, "config_ldap_group_name") if "config_ldap_serv_password" in to_save and to_save["config_ldap_serv_password"] != "": reboot_required |= 1 config.set_from_dictionary(to_save, "config_ldap_serv_password", base64.b64encode, encode='UTF-8') config.save() - if not config.config_ldap_provider_url \ - or not config.config_ldap_port \ - or not config.config_ldap_dn \ - or not config.config_ldap_user_object: - return reboot_required, _configuration_result(_('Please Enter a LDAP Provider, ' - 'Port, DN and User Object Identifier'), gdriveError) - - if config.config_ldap_authentication > constants.LDAP_AUTH_ANONYMOUS: - if config.config_ldap_authentication > constants.LDAP_AUTH_UNAUTHENTICATE: - if not config.config_ldap_serv_username or not bool(config.config_ldap_serv_password): - return reboot_required, _configuration_result('Please Enter a LDAP Service Account and Password', gdriveError) - else: - if not config.config_ldap_serv_username: - return reboot_required, _configuration_result('Please Enter a LDAP Service Account', gdriveError) - - if config.config_ldap_group_object_filter: - if 
config.config_ldap_group_object_filter.count("%s") != 1: - return reboot_required, _configuration_result(_('LDAP Group Object Filter Needs to Have One "%s" Format Identifier'), - gdriveError) - if config.config_ldap_group_object_filter.count("(") != config.config_ldap_group_object_filter.count(")"): - return reboot_required, _configuration_result(_('LDAP Group Object Filter Has Unmatched Parenthesis'), - gdriveError) - - if config.config_ldap_user_object.count("%s") != 1: - return reboot_required, _configuration_result(_('LDAP User Object Filter needs to Have One "%s" Format Identifier'), - gdriveError) - if config.config_ldap_user_object.count("(") != config.config_ldap_user_object.count(")"): - return reboot_required, _configuration_result(_('LDAP User Object Filter Has Unmatched Parenthesis'), - gdriveError) - - if config.config_ldap_cert_path and not os.path.isdir(config.config_ldap_cert_path): - return reboot_required, _configuration_result(_('LDAP Certificate Location is not Valid, Please Enter Correct Path'), - gdriveError) - return reboot_required, None + return _configuration_ldap_check(reboot_required, to_save, gdrive_error) -def _configuration_update_helper(): +def _configuration_update_helper(configured): reboot_required = False db_change = False to_save = request.form.to_dict() - gdriveError = None + gdrive_error = None - to_save['config_calibre_dir'] = re.sub('[\\/]metadata\.db$', '', to_save['config_calibre_dir'], flags=re.IGNORECASE) + to_save['config_calibre_dir'] = re.sub(r'[\\/]metadata\.db$', + '', + to_save['config_calibre_dir'], + flags=re.IGNORECASE) try: db_change |= _config_string(to_save, "config_calibre_dir") - # Google drive setup - gdriveError = _configuration_gdrive_helper(to_save) + # Google drive setup + gdrive_error = _configuration_gdrive_helper(to_save) reboot_required |= _config_int(to_save, "config_port") reboot_required |= _config_string(to_save, "config_keyfile") if config.config_keyfile and not
os.path.isfile(config.config_keyfile): - return _configuration_result(_('Keyfile Location is not Valid, Please Enter Correct Path'), gdriveError) + return _configuration_result(_('Keyfile Location is not Valid, Please Enter Correct Path'), + gdrive_error, + configured) reboot_required |= _config_string(to_save, "config_certfile") if config.config_certfile and not os.path.isfile(config.config_certfile): - return _configuration_result(_('Certfile Location is not Valid, Please Enter Correct Path'), gdriveError) + return _configuration_result(_('Certfile Location is not Valid, Please Enter Correct Path'), + gdrive_error, + configured) _config_checkbox_int(to_save, "config_uploading") - _config_checkbox_int(to_save, "config_anonbrowse") + # Reboot on config_anonbrowse with enabled ldap, as decorators are changed in this case + reboot_required |= (_config_checkbox_int(to_save, "config_anonbrowse") + and config.config_login_type == constants.LOGIN_LDAP) _config_checkbox_int(to_save, "config_public_reg") _config_checkbox_int(to_save, "config_register_email") reboot_required |= _config_checkbox_int(to_save, "config_kobo_sync") _config_int(to_save, "config_external_port") _config_checkbox_int(to_save, "config_kobo_proxy") - _config_string(to_save, "config_upload_formats") - constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip() for x in config.config_upload_formats.split(',')] + if "config_upload_formats" in to_save: + to_save["config_upload_formats"] = ','.join( + helper.uniq([x.lstrip().rstrip().lower() for x in to_save["config_upload_formats"].split(',')])) + _config_string(to_save, "config_upload_formats") + constants.EXTENSIONS_UPLOAD = config.config_upload_formats.split(',') _config_string(to_save, "config_calibre") _config_string(to_save, "config_converterpath") @@ -650,9 +870,9 @@ def _configuration_update_helper(): reboot_required |= _config_int(to_save, "config_login_type") - #LDAP configurator, + # LDAP configurator, if config.config_login_type ==
constants.LOGIN_LDAP: - reboot, message = _configuration_ldap_helper(to_save, gdriveError) + reboot, message = _configuration_ldap_helper(to_save, gdrive_error) if message: return message reboot_required |= reboot @@ -661,7 +881,7 @@ def _configuration_update_helper(): _config_checkbox(to_save, "config_remote_login") if not config.config_remote_login: - ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.token_type==0).delete() + ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.token_type == 0).delete() # Goodreads configuration _config_checkbox(to_save, "config_use_goodreads") @@ -682,7 +902,7 @@ def _configuration_update_helper(): if config.config_login_type == constants.LOGIN_OAUTH: reboot_required |= _configuration_oauth_helper(to_save) - reboot, message = _configuration_logfile_helper(to_save, gdriveError) + reboot, message = _configuration_logfile_helper(to_save, gdrive_error) if message: return message reboot_required |= reboot @@ -691,10 +911,10 @@ def _configuration_update_helper(): if "config_rarfile_location" in to_save: unrar_status = helper.check_unrar(config.config_rarfile_location) if unrar_status: - return _configuration_result(unrar_status, gdriveError) + return _configuration_result(unrar_status, gdrive_error, configured) except (OperationalError, InvalidRequestError): ub.session.rollback() - _configuration_result(_(u"Settings DB is not Writeable"), gdriveError) + _configuration_result(_(u"Settings DB is not Writeable"), gdrive_error, configured) try: metadata_db = os.path.join(config.config_calibre_dir, "metadata.db") @@ -702,11 +922,13 @@ def _configuration_update_helper(): gdriveutils.downloadFile(None, "metadata.db", metadata_db) db_change = True except Exception as e: - return _configuration_result('%s' % e, gdriveError) + return _configuration_result('%s' % e, gdrive_error, configured) if db_change: if not calibre_db.setup_db(config, ub.app_DB_path): - return _configuration_result(_('DB Location is not Valid, Please 
Enter Correct Path'), gdriveError) + return _configuration_result(_('DB Location is not Valid, Please Enter Correct Path'), + gdrive_error, + configured) if not os.access(os.path.join(config.config_calibre_dir, "metadata.db"), os.W_OK): flash(_(u"DB is not Writeable"), category="warning") @@ -715,16 +937,16 @@ def _configuration_update_helper(): if reboot_required: web_server.stop(True) - return _configuration_result(None, gdriveError) + return _configuration_result(None, gdrive_error, configured) -def _configuration_result(error_flash=None, gdriveError=None): +def _configuration_result(error_flash=None, gdrive_error=None, configured=True): gdrive_authenticate = not is_gdrive_ready() gdrivefolders = [] - if gdriveError is None: - gdriveError = gdriveutils.get_error_text() - if gdriveError: - gdriveError = _(gdriveError) + if gdrive_error is None: + gdrive_error = gdriveutils.get_error_text() + if gdrive_error: + gdrive_error = _(gdrive_error) else: # if config.config_use_google_drive and\ if not gdrive_authenticate and gdrive_support: @@ -737,14 +959,20 @@ def _configuration_result(error_flash=None, gdriveError=None): flash(error_flash, category="error") show_login_button = False - return render_title_template("config_edit.html", config=config, provider=oauthblueprints, - show_back_button=show_back_button, show_login_button=show_login_button, + return render_title_template("config_edit.html", + config=config, + provider=oauthblueprints, + show_back_button=show_back_button, + show_login_button=show_login_button, show_authenticate_google_drive=gdrive_authenticate, - gdriveError=gdriveError, gdrivefolders=gdrivefolders, feature_support=feature_support, + filepicker=configured, + gdriveError=gdrive_error, + gdrivefolders=gdrivefolders, + feature_support=feature_support, title=_(u"Basic Configuration"), page="config") -def _handle_new_user(to_save, content,languages, translations, kobo_support): +def _handle_new_user(to_save, content, languages, translations, 
kobo_support): content.default_language = to_save["default_language"] # content.mature_content = "Show_mature_content" in to_save content.locale = to_save.get("locale", content.locale) @@ -795,22 +1023,36 @@ def _handle_new_user(to_save, content,languages, translations, kobo_support): ub.session.rollback() flash(_(u"Settings DB is not Writeable"), category="error") +def delete_user(content): + if ub.session.query(ub.User).filter(ub.User.role.op('&')(constants.ROLE_ADMIN) == constants.ROLE_ADMIN, + ub.User.id != content.id).count(): + ub.session.query(ub.User).filter(ub.User.id == content.id).delete() + ub.session_commit() + flash(_(u"User '%(nick)s' deleted", nick=content.nickname), category="success") + return redirect(url_for('admin.admin')) + else: + flash(_(u"No admin user remaining, can't delete user", nick=content.nickname), category="error") + return redirect(url_for('admin.admin')) -def _handle_edit_user(to_save, content,languages, translations, kobo_support, downloads): + +def save_edited_user(content): + try: + ub.session_commit() + flash(_(u"User '%(nick)s' updated", nick=content.nickname), category="success") + except IntegrityError: + ub.session.rollback() + flash(_(u"An unknown error occurred."), category="error") + except OperationalError: + ub.session.rollback() + flash(_(u"Settings DB is not Writeable"), category="error") + + +def _handle_edit_user(to_save, content, languages, translations, kobo_support): if "delete" in to_save: - if ub.session.query(ub.User).filter(ub.User.role.op('&')(constants.ROLE_ADMIN) == constants.ROLE_ADMIN, - ub.User.id != content.id).count(): - ub.session.query(ub.User).filter(ub.User.id == content.id).delete() - ub.session.commit() - flash(_(u"User '%(nick)s' deleted", nick=content.nickname), category="success") - return redirect(url_for('admin.admin')) - else: - flash(_(u"No admin user remaining, can't delete user", nick=content.nickname), category="error") - return redirect(url_for('admin.admin')) + return
delete_user(content) else: if not ub.session.query(ub.User).filter(ub.User.role.op('&')(constants.ROLE_ADMIN) == constants.ROLE_ADMIN, - ub.User.id != content.id).count() and \ - not 'admin_role' in to_save: + ub.User.id != content.id).count() and 'admin_role' not in to_save: flash(_(u"No admin user remaining, can't remove admin role", nick=content.nickname), category="error") return redirect(url_for('admin.admin')) @@ -824,12 +1066,12 @@ def _handle_edit_user(to_save, content,languages, translations, kobo_support, do content.role &= ~constants.ROLE_ANONYMOUS val = [int(k[5:]) for k in to_save if k.startswith('show_')] - sidebar = ub.get_sidebar_config() + sidebar = get_sidebar_config() for element in sidebar: value = element['visibility'] if value in val and not content.check_visibility(value): content.sidebar_view |= value - elif not value in val and content.check_visibility(value): + elif value not in val and content.check_visibility(value): content.sidebar_view &= ~value if "Show_detail_random" in to_save: @@ -855,7 +1097,6 @@ def _handle_edit_user(to_save, content,languages, translations, kobo_support, do kobo_support=kobo_support, new_user=0, content=content, - downloads=downloads, registered_oauth=oauth_check, title=_(u"Edit User %(nick)s", nick=content.nickname), page="edituser") if "nickname" in to_save and to_save["nickname"] != content.nickname: @@ -869,7 +1110,6 @@ def _handle_edit_user(to_save, content,languages, translations, kobo_support, do languages=languages, mail_configured=config.get_mail_server_configured(), new_user=0, content=content, - downloads=downloads, registered_oauth=oauth_check, kobo_support=kobo_support, title=_(u"Edit User %(nick)s", nick=content.nickname), @@ -877,15 +1117,7 @@ def _handle_edit_user(to_save, content,languages, translations, kobo_support, do if "kindle_mail" in to_save and to_save["kindle_mail"] != content.kindle_mail: content.kindle_mail = to_save["kindle_mail"] - try: - ub.session.commit() - flash(_(u"User 
'%(nick)s' updated", nick=content.nickname), category="success") - except IntegrityError: - ub.session.rollback() - flash(_(u"An unknown error occured."), category="error") - except OperationalError: - ub.session.rollback() - flash(_(u"Settings DB is not Writeable"), category="error") + return save_edited_user(content) @admi.route("/admin/user/new", methods=["GET", "POST"]) @@ -958,28 +1190,20 @@ def update_mailsettings(): @admin_required def edit_user(user_id): content = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() # type: ub.User - if not content: + if not content or (not config.config_anonbrowse and content.nickname == "Guest"): flash(_(u"User not found"), category="error") return redirect(url_for('admin.admin')) - downloads = list() languages = calibre_db.speaking_language() translations = babel.list_translations() + [LC('en')] kobo_support = feature_support['kobo'] and config.config_kobo_sync - for book in content.downloads: - downloadbook = calibre_db.get_book(book.book_id) - if downloadbook: - downloads.append(downloadbook) - else: - ub.delete_download(book.book_id) if request.method == "POST": to_save = request.form.to_dict() - _handle_edit_user(to_save, content, languages, translations, kobo_support, downloads) + _handle_edit_user(to_save, content, languages, translations, kobo_support) return render_title_template("user_edit.html", translations=translations, languages=languages, new_user=0, content=content, - downloads=downloads, registered_oauth=oauth_check, mail_configured=config.get_mail_server_configured(), kobo_support=kobo_support, @@ -1008,9 +1232,8 @@ def reset_user_password(user_id): @login_required @admin_required def view_logfile(): - logfiles = {} - logfiles[0] = logger.get_logfile(config.config_logfile) - logfiles[1] = logger.get_accesslogfile(config.config_access_logfile) + logfiles = {0: logger.get_logfile(config.config_logfile), + 1: logger.get_accesslogfile(config.config_access_logfile)} return 
render_title_template("logviewer.html", title=_(u"Logfile viewer"), accesslog_enable=config.config_access_log, @@ -1035,11 +1258,37 @@ def send_logfile(logtype): return "" +@admi.route("/admin/logdownload/<int:logtype>") +@login_required +@admin_required +def download_log(logtype): + if logtype == 0: + file_name = logger.get_logfile(config.config_logfile) + elif logtype == 1: + file_name = logger.get_accesslogfile(config.config_access_logfile) + else: + abort(404) + if logger.is_valid_logfile(file_name): + return debug_info.assemble_logfiles(file_name) + abort(404) + + +@admi.route("/admin/debug") +@login_required +@admin_required +def download_debug(): + return debug_info.send_debug() + + @admi.route("/get_update_status", methods=['GET']) -@login_required_if_no_ano +@login_required +@admin_required def get_update_status(): - log.info(u"Update status requested") - return updater_thread.get_available_updates(request.method, locale=get_locale()) + if feature_support['updater']: + log.info(u"Update status requested") + return updater_thread.get_available_updates(request.method, locale=get_locale()) + else: + return '' @admi.route("/get_updater_status", methods=['GET', 'POST']) @@ -1047,32 +1296,156 @@ def get_update_status(): @admin_required def get_updater_status(): status = {} - if request.method == "POST": - commit = request.form.to_dict() - if "start" in commit and commit['start'] == 'True': - text = { - "1": _(u'Requesting update package'), - "2": _(u'Downloading update package'), - "3": _(u'Unzipping update package'), - "4": _(u'Replacing files'), - "5": _(u'Database connections are closed'), - "6": _(u'Stopping server'), - "7": _(u'Update finished, please press okay and reload page'), - "8": _(u'Update failed:') + u' ' + _(u'HTTP Error'), - "9": _(u'Update failed:') + u' ' + _(u'Connection error'), - "10": _(u'Update failed:') + u' ' + _(u'Timeout while establishing connection'), - "11": _(u'Update failed:') + u' ' + _(u'General error'), - "12": _(u'Update failed:') + u' '
+ _(u'Update File Could Not be Saved in Temp Dir') - } - status['text'] = text - updater_thread.status = 0 - updater_thread.resume() - status['status'] = updater_thread.get_update_status() - elif request.method == "GET": + if feature_support['updater']: + if request.method == "POST": + commit = request.form.to_dict() + if "start" in commit and commit['start'] == 'True': + text = { + "1": _(u'Requesting update package'), + "2": _(u'Downloading update package'), + "3": _(u'Unzipping update package'), + "4": _(u'Replacing files'), + "5": _(u'Database connections are closed'), + "6": _(u'Stopping server'), + "7": _(u'Update finished, please press okay and reload page'), + "8": _(u'Update failed:') + u' ' + _(u'HTTP Error'), + "9": _(u'Update failed:') + u' ' + _(u'Connection error'), + "10": _(u'Update failed:') + u' ' + _(u'Timeout while establishing connection'), + "11": _(u'Update failed:') + u' ' + _(u'General error'), + "12": _(u'Update failed:') + u' ' + _(u'Update File Could Not be Saved in Temp Dir') + } + status['text'] = text + updater_thread.status = 0 + updater_thread.resume() + status['status'] = updater_thread.get_update_status() + elif request.method == "GET": + try: + status['status'] = updater_thread.get_update_status() + if status['status'] == -1: + status['status'] = 7 + except Exception: + status['status'] = 11 + return json.dumps(status) + return '' + + +def create_ldap_user(user, user_data, config): + imported = 0 + showtext = None + + user_login_field = extract_dynamic_field_from_filter(user, config.config_ldap_user_object) + + username = user_data[user_login_field][0].decode('utf-8') + # check for duplicate username + if ub.session.query(ub.User).filter(func.lower(ub.User.nickname) == username.lower()).first(): + # if ub.session.query(ub.User).filter(ub.User.nickname == username).first(): + log.warning("LDAP User %s Already in Database", user_data) + return imported, showtext + + kindlemail = '' + if 'mail' in user_data: + useremail = 
user_data['mail'][0].decode('utf-8') + if len(user_data['mail']) > 1: + kindlemail = user_data['mail'][1].decode('utf-8') + + else: + log.debug('No Mail Field Found in LDAP Response') + useremail = username + '@email.com' + # check for duplicate email + if ub.session.query(ub.User).filter(func.lower(ub.User.email) == useremail.lower()).first(): + log.warning("LDAP Email %s Already in Database", user_data) + return imported, showtext + + content = ub.User() + content.nickname = username + content.password = '' # dummy password which will be replaced by ldap one + content.email = useremail + content.kindle_mail = kindlemail + content.role = config.config_default_role + content.sidebar_view = config.config_default_show + content.allowed_tags = config.config_allowed_tags + content.denied_tags = config.config_denied_tags + content.allowed_column_value = config.config_allowed_column_value + content.denied_column_value = config.config_denied_column_value + ub.session.add(content) + try: + ub.session.commit() + imported = 1 + except Exception as e: + log.warning("Failed to create LDAP user: %s - %s", user, e) + ub.session.rollback() + showtext = _(u'Failed to Create at Least One LDAP User') + return imported, showtext + + +@admi.route('/import_ldap_users') +@login_required +@admin_required +def import_ldap_users(): + showtext = {} + try: + new_users = services.ldap.get_group_members(config.config_ldap_group_name) + except (services.ldap.LDAPException, TypeError, AttributeError, KeyError) as e: + log.debug_or_exception(e) + showtext['text'] = _(u'Error: %(ldaperror)s', ldaperror=e) + return json.dumps(showtext) + if not new_users: + log.debug('LDAP empty response') + showtext['text'] = _(u'Error: No user returned in response of LDAP server') + return json.dumps(showtext) + + imported = 0 + for username in new_users: + user = username.decode('utf-8') + if '=' in user: + # if member object field is empty take user object as filter + if config.config_ldap_member_user_object: + 
query_filter = config.config_ldap_member_user_object + else: + query_filter = config.config_ldap_user_object + try: + user_identifier = extract_user_identifier(user, query_filter) + except Exception as e: + log.warning(e) + continue + else: + user_identifier = user + query_filter = None try: - status['status'] = updater_thread.get_update_status() - if status['status'] == -1: - status['status'] = 7 - except Exception: - status['status'] = 11 - return json.dumps(status) + user_data = services.ldap.get_object_details(user=user_identifier, query_filter=query_filter) + except AttributeError as e: + log.debug_or_exception(e) + continue + if user_data: + success, txt = create_ldap_user(user, user_data, config) + # In case of error store text for showing it + if txt: + showtext['text'] = txt + imported += success + else: + log.warning("LDAP User: %s Not Found", user) + showtext['text'] = _(u'At Least One LDAP User Not Found in Database') + if not showtext: + showtext['text'] = _(u'{} User Successfully Imported'.format(imported)) + return json.dumps(showtext) + + +def extract_user_data_from_field(user, field): + match = re.search(field + r"=([\d\s\w-]+)", user, re.IGNORECASE | re.UNICODE) + if match: + return match.group(1) + else: + raise Exception("Could Not Parse LDAP User: {}".format(user)) + + +def extract_dynamic_field_from_filter(user, filtr): + match = re.search("([a-zA-Z0-9-]+)=%s", filtr, re.IGNORECASE | re.UNICODE) + if match: + return match.group(1) + else: + raise Exception("Could Not Parse LDAP Userfield: {}".format(user)) + + +def extract_user_identifier(user, filtr): + dynamic_field = extract_dynamic_field_from_filter(user, filtr) + return extract_user_data_from_field(user, dynamic_field) diff --git a/cps/cli.py b/cps/cli.py index c94cb89d..07b719d2 100644 --- a/cps/cli.py +++ b/cps/cli.py @@ -45,6 +45,7 @@ parser.add_argument('-v', '--version', action='version', help='Shows version num version=version_info()) parser.add_argument('-i', metavar='ip-address',
help='Server IP-Address to listen') parser.add_argument('-s', metavar='user:pass', help='Sets specific username to new password') +parser.add_argument('-f', action='store_true', help='Enables filepicker in unconfigured mode') args = parser.parse_args() if sys.version_info < (3, 0): @@ -109,4 +110,10 @@ if ipadress: sys.exit(1) # handle and check user password argument -user_password = args.s or None +user_credentials = args.s or None +if user_credentials and ":" not in user_credentials: + print("No valid username:password format") + sys.exit(3) + +# Handles enabling of filepicker +filepicker = args.f or None diff --git a/cps/comic.py b/cps/comic.py index e788fc44..c2b30197 100644 --- a/cps/comic.py +++ b/cps/comic.py @@ -18,27 +18,27 @@ from __future__ import division, print_function, unicode_literals import os -import io from . import logger, isoLanguages from .constants import BookMeta -try: - from PIL import Image as PILImage - use_PIL = True -except ImportError as e: - use_PIL = False - log = logger.create() +try: + from wand.image import Image + use_IM = True +except (ImportError, RuntimeError) as e: + use_IM = False + + try: from comicapi.comicarchive import ComicArchive, MetaDataStyle use_comic_meta = True try: from comicapi import __version__ as comic_version - except (ImportError): + except ImportError: comic_version = '' except (ImportError, LookupError) as e: log.debug('Cannot import comicapi, extracting comic metadata will not work: %s', e) @@ -47,82 +47,88 @@ except (ImportError, LookupError) as e: try: import rarfile use_rarfile = True - except ImportError as e: + except (ImportError, SyntaxError) as e: log.debug('Cannot import rarfile, extracting cover files from rar files will not work: %s', e) use_rarfile = False use_comic_meta = False -def _cover_processing(tmp_file_name, img, extension): - if use_PIL: - # convert to jpg because calibre only supports jpg - if extension in ('.png', '.webp'): - imgc = PILImage.open(io.BytesIO(img)) - im =
imgc.convert('RGB') - tmp_bytesio = io.BytesIO() - im.save(tmp_bytesio, format='JPEG') - img = tmp_bytesio.getvalue() +NO_JPEG_EXTENSIONS = ['.png', '.webp', '.bmp'] +COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg'] - prefix = os.path.dirname(tmp_file_name) - if img: - tmp_cover_name = prefix + '/cover.jpg' - image = open(tmp_cover_name, 'wb') - image.write(img) - image.close() - else: - tmp_cover_name = None +def _cover_processing(tmp_file_name, img, extension): + tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg') + if use_IM: + # convert to jpg because calibre only supports jpg + if extension in NO_JPEG_EXTENSIONS: + with Image(filename=tmp_file_name) as imgc: + imgc.format = 'jpeg' + imgc.transform_colorspace('rgb') + imgc.save(tmp_cover_name) + return tmp_cover_name + + if not img: + return None + + with open(tmp_cover_name, 'wb') as f: + f.write(img) return tmp_cover_name - -def _extractCover(tmp_file_name, original_file_extension, rarExceutable): - cover_data = extension = None - if use_comic_meta: - archive = ComicArchive(tmp_file_name) - for index, name in enumerate(archive.getPageNameList()): +def _extract_Cover_from_archive(original_file_extension, tmp_file_name, rarExecutable): + cover_data = None + if original_file_extension.upper() == '.CBZ': + cf = zipfile.ZipFile(tmp_file_name) + for name in cf.namelist(): ext = os.path.splitext(name) if len(ext) > 1: extension = ext[1].lower() - if extension in ('.jpg', '.jpeg', '.png', '.webp'): - cover_data = archive.getPage(index) + if extension in COVER_EXTENSIONS: + cover_data = cf.read(name) break - else: - if original_file_extension.upper() == '.CBZ': - cf = zipfile.ZipFile(tmp_file_name) - for name in cf.namelist(): - ext = os.path.splitext(name) - if len(ext) > 1: - extension = ext[1].lower() - if extension in ('.jpg', '.jpeg', '.png', '.webp'): - cover_data = cf.read(name) - break - elif original_file_extension.upper() == '.CBT': - cf = tarfile.TarFile(tmp_file_name) + 
elif original_file_extension.upper() == '.CBT': + cf = tarfile.TarFile(tmp_file_name) + for name in cf.getnames(): + ext = os.path.splitext(name) + if len(ext) > 1: + extension = ext[1].lower() + if extension in COVER_EXTENSIONS: + cover_data = cf.extractfile(name).read() + break + elif original_file_extension.upper() == '.CBR' and use_rarfile: + try: + rarfile.UNRAR_TOOL = rarExecutable + cf = rarfile.RarFile(tmp_file_name) for name in cf.getnames(): ext = os.path.splitext(name) if len(ext) > 1: extension = ext[1].lower() - if extension in ('.jpg', '.jpeg', '.png', '.webp'): - cover_data = cf.extractfile(name).read() + if extension in COVER_EXTENSIONS: + cover_data = cf.read(name) break - elif original_file_extension.upper() == '.CBR' and use_rarfile: - try: - rarfile.UNRAR_TOOL = rarExceutable - cf = rarfile.RarFile(tmp_file_name) - for name in cf.getnames(): - ext = os.path.splitext(name) - if len(ext) > 1: - extension = ext[1].lower() - if extension in ('.jpg', '.jpeg', '.png', '.webp'): - cover_data = cf.read(name) - break - except Exception as e: - log.debug('Rarfile failed with error: %s', e) + except Exception as e: + log.debug('Rarfile failed with error: %s', e) + return cover_data + + +def _extractCover(tmp_file_name, original_file_extension, rarExecutable): + cover_data = extension = None + if use_comic_meta: + archive = ComicArchive(tmp_file_name, rar_exe_path=rarExecutable) + for index, name in enumerate(archive.getPageNameList()): + ext = os.path.splitext(name) + if len(ext) > 1: + extension = ext[1].lower() + if extension in COVER_EXTENSIONS: + cover_data = archive.getPage(index) + break + else: + cover_data = _extract_Cover_from_archive(original_file_extension, tmp_file_name, rarExecutable) return _cover_processing(tmp_file_name, cover_data, extension) -def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rarExceutable): +def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rarExecutable): if 
use_comic_meta: - archive = ComicArchive(tmp_file_path, rar_exe_path=rarExceutable) + archive = ComicArchive(tmp_file_path, rar_exe_path=rarExecutable) if archive.seemsToBeAComicArchive(): if archive.hasMetadata(MetaDataStyle.CIX): style = MetaDataStyle.CIX @@ -134,21 +140,16 @@ def get_comic_info(tmp_file_path, original_file_name, original_file_extension, r # if style is not None: loadedMetadata = archive.readMetadata(style) - lang = loadedMetadata.language - if lang: - if len(lang) == 2: - loadedMetadata.language = isoLanguages.get(part1=lang).name - elif len(lang) == 3: - loadedMetadata.language = isoLanguages.get(part3=lang).name - else: - loadedMetadata.language = "" + lang = loadedMetadata.language or "" + loadedMetadata.language = isoLanguages.get_lang3(lang) return BookMeta( file_path=tmp_file_path, extension=original_file_extension, title=loadedMetadata.title or original_file_name, - author=" & ".join([credit["person"] for credit in loadedMetadata.credits if credit["role"] == "Writer"]) or u'Unknown', - cover=_extractCover(tmp_file_path, original_file_extension, rarExceutable), + author=" & ".join([credit["person"] + for credit in loadedMetadata.credits if credit["role"] == "Writer"]) or u'Unknown', + cover=_extractCover(tmp_file_path, original_file_extension, rarExecutable), description=loadedMetadata.comments or "", tags="", series=loadedMetadata.series or "", @@ -160,7 +161,7 @@ def get_comic_info(tmp_file_path, original_file_name, original_file_extension, r extension=original_file_extension, title=original_file_name, author=u'Unknown', - cover=_extractCover(tmp_file_path, original_file_extension, rarExceutable), + cover=_extractCover(tmp_file_path, original_file_extension, rarExecutable), description="", tags="", series="", diff --git a/cps/config_sql.py b/cps/config_sql.py index 3573abe7..60e17eb3 100644 --- a/cps/config_sql.py +++ b/cps/config_sql.py @@ -19,10 +19,10 @@ from __future__ import division, print_function, unicode_literals import os 
-import json import sys from sqlalchemy import exc, Column, String, Integer, SmallInteger, Boolean, BLOB, JSON +from sqlalchemy.exc import OperationalError from sqlalchemy.ext.declarative import declarative_base from . import constants, cli, logger, ub @@ -109,9 +109,12 @@ class _Settings(_Base): config_ldap_serv_username = Column(String, default='cn=admin,dc=example,dc=org') config_ldap_serv_password = Column(String, default="") config_ldap_encryption = Column(SmallInteger, default=0) + config_ldap_cacert_path = Column(String, default="") config_ldap_cert_path = Column(String, default="") + config_ldap_key_path = Column(String, default="") config_ldap_dn = Column(String, default='dc=example,dc=org') config_ldap_user_object = Column(String, default='uid=%s') + config_ldap_member_user_object = Column(String, default='') # config_ldap_openldap = Column(Boolean, default=True) config_ldap_group_object_filter = Column(String, default='(&(objectclass=posixGroup)(cn=%s))') config_ldap_group_members_field = Column(String, default='memberUid') @@ -143,15 +146,16 @@ class _ConfigSQL(object): self.load() change = False - if self.config_converterpath == None: + if self.config_converterpath == None: # pylint: disable=access-member-before-definition change = True self.config_converterpath = autodetect_calibre_binary() - if self.config_kepubifypath == None: + if self.config_kepubifypath == None: # pylint: disable=access-member-before-definition + change = True self.config_kepubifypath = autodetect_kepubify_binary() - if self.config_rarfile_location == None: + if self.config_rarfile_location == None: # pylint: disable=access-member-before-definition change = True self.config_rarfile_location = autodetect_unrar_binary() if change: @@ -178,7 +182,8 @@ class _ConfigSQL(object): return None return self.config_keyfile - def get_config_ipaddress(self): + @staticmethod + def get_config_ipaddress(): return cli.ipadress or "" def _has_role(self, role_flag): @@ -269,6 +274,14 @@ class 
_ConfigSQL(object): setattr(self, field, new_value) return True + def toDict(self): + storage = {} + for k, v in self.__dict__.items(): + if k[0] != '_' and not k.endswith("password") and not k.endswith("secret"): + storage[k] = v + return storage + + def load(self): '''Load all configuration values from the underlying storage.''' s = self._read_from_storage() # type: _Settings @@ -287,13 +300,18 @@ class _ConfigSQL(object): db_file = os.path.join(self.config_calibre_dir, 'metadata.db') have_metadata_db = os.path.isfile(db_file) self.db_configured = have_metadata_db - constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip() for x in self.config_upload_formats.split(',')] + constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip().lower() for x in self.config_upload_formats.split(',')] + # pylint: disable=access-member-before-definition logfile = logger.setup(self.config_logfile, self.config_log_level) if logfile != self.config_logfile: log.warning("Log path %s not valid, falling back to default", self.config_logfile) self.config_logfile = logfile self._session.merge(s) - self._session.commit() + try: + self._session.commit() + except OperationalError as e: + log.error('Database error: %s', e) + self._session.rollback() def save(self): '''Apply all configuration values to the underlying storage.''' @@ -307,7 +325,11 @@ class _ConfigSQL(object): log.debug("_ConfigSQL updating storage") self._session.merge(s) - self._session.commit() + try: + self._session.commit() + except OperationalError as e: + log.error('Database error: %s', e) + self._session.rollback() self.load() def invalidate(self, error=None): @@ -348,7 +370,10 @@ def _migrate_table(session, orm_class): changed = True if changed: - session.commit() + try: + session.commit() + except OperationalError: + session.rollback() def autodetect_calibre_binary(): diff --git a/cps/constants.py b/cps/constants.py index ccb0d8a8..200bec8d 100644 --- a/cps/constants.py +++ b/cps/constants.py @@ -21,7 +21,9 @@ import sys import os 
from collections import namedtuple +# if installed via pip this variable is set to true HOME_CONFIG = False +UPDATER_AVAILABLE = True # Base dir is parent of current file, necessary if called from different folder if sys.version_info < (3, 0): @@ -40,7 +42,7 @@ if HOME_CONFIG: os.makedirs(home_dir) CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', home_dir) else: - CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', BASE_DIR) + CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', BASE_DIR) ROLE_USER = 0 << 0 @@ -81,10 +83,11 @@ SIDEBAR_PUBLISHER = 1 << 12 SIDEBAR_RATING = 1 << 13 SIDEBAR_FORMAT = 1 << 14 SIDEBAR_ARCHIVED = 1 << 15 -# SIDEBAR_LIST = 1 << 16 +SIDEBAR_DOWNLOAD = 1 << 16 +SIDEBAR_LIST = 1 << 17 ADMIN_USER_ROLES = sum(r for r in ALL_ROLES.values()) & ~ROLE_ANONYMOUS -ADMIN_USER_SIDEBAR = (SIDEBAR_ARCHIVED << 1) - 1 +ADMIN_USER_SIDEBAR = (SIDEBAR_LIST << 1) - 1 UPDATE_STABLE = 0 << 0 AUTO_UPDATE_STABLE = 1 << 0 @@ -101,7 +104,7 @@ LDAP_AUTH_SIMPLE = 0 DEFAULT_MAIL_SERVER = "mail.example.org" -DEFAULT_PASSWORD = "admin123" +DEFAULT_PASSWORD = "admin123" # nosec # noqa DEFAULT_PORT = 8083 env_CALIBRE_PORT = os.environ.get("CALIBRE_PORT", DEFAULT_PORT) try: @@ -112,7 +115,8 @@ del env_CALIBRE_PORT EXTENSIONS_AUDIO = {'mp3', 'mp4', 'ogg', 'opus', 'wav', 'flac', 'm4a', 'm4b'} -EXTENSIONS_CONVERT = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2', 'lit', 'lrf', 'txt', 'htmlz', 'rtf', 'odt'] +EXTENSIONS_CONVERT_FROM = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2', 'lit', 'lrf', 'txt', 'htmlz', 'rtf', 'odt','cbz','cbr'] +EXTENSIONS_CONVERT_TO = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2', 'lit', 'lrf', 'txt', 'htmlz', 'rtf', 'odt'] EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'djvu', 'prc', 'doc', 'docx', 'fb2', 'html', 'rtf', 'lit', 'odt', 'mp3', 'mp4', 'ogg', 'opus', 'wav', 'flac', 'm4a', 'm4b'} @@ -128,7 +132,7 @@ def selected_roles(dictionary): BookMeta = namedtuple('BookMeta', 'file_path, extension, 
title, author, cover, description, tags, series, ' 'series_id, languages') -STABLE_VERSION = {'version': '0.6.9 Beta'} +STABLE_VERSION = {'version': '0.6.12 Beta'} NIGHTLY_VERSION = {} NIGHTLY_VERSION[0] = '$Format:%H$' diff --git a/cps/converter.py b/cps/converter.py index 01a6fbc7..2ff73666 100644 --- a/cps/converter.py +++ b/cps/converter.py @@ -19,7 +19,6 @@ from __future__ import division, print_function, unicode_literals import os import re -import sys from flask_babel import gettext as _ from . import config, logger diff --git a/cps/db.py b/cps/db.py index fb1d5c57..3eec6454 100644 --- a/cps/db.py +++ b/cps/db.py @@ -24,18 +24,17 @@ import re import ast import json from datetime import datetime -import threading from sqlalchemy import create_engine from sqlalchemy import Table, Column, ForeignKey, CheckConstraint from sqlalchemy import String, Integer, Boolean, TIMESTAMP, Float from sqlalchemy.orm import relationship, sessionmaker, scoped_session -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.exc import OperationalError +from sqlalchemy.orm.collections import InstrumentedList +from sqlalchemy.ext.declarative import declarative_base, DeclarativeMeta from sqlalchemy.pool import StaticPool -from flask_login import current_user from sqlalchemy.sql.expression import and_, true, false, text, func, or_ from sqlalchemy.ext.associationproxy import association_proxy +from flask_login import current_user from babel import Locale as LC from babel.core import UnknownLocaleError from flask_babel import gettext as _ @@ -43,12 +42,15 @@ from flask_babel import gettext as _ from . 
import logger, ub, isoLanguages from .pagination import Pagination +from weakref import WeakSet + try: import unidecode use_unidecode = True except ImportError: use_unidecode = False +log = logger.create() cc_exceptions = ['datetime', 'comments', 'composite', 'series'] cc_classes = {} @@ -56,34 +58,34 @@ cc_classes = {} Base = declarative_base() books_authors_link = Table('books_authors_link', Base.metadata, - Column('book', Integer, ForeignKey('books.id'), primary_key=True), - Column('author', Integer, ForeignKey('authors.id'), primary_key=True) - ) + Column('book', Integer, ForeignKey('books.id'), primary_key=True), + Column('author', Integer, ForeignKey('authors.id'), primary_key=True) + ) books_tags_link = Table('books_tags_link', Base.metadata, - Column('book', Integer, ForeignKey('books.id'), primary_key=True), - Column('tag', Integer, ForeignKey('tags.id'), primary_key=True) - ) + Column('book', Integer, ForeignKey('books.id'), primary_key=True), + Column('tag', Integer, ForeignKey('tags.id'), primary_key=True) + ) books_series_link = Table('books_series_link', Base.metadata, - Column('book', Integer, ForeignKey('books.id'), primary_key=True), - Column('series', Integer, ForeignKey('series.id'), primary_key=True) - ) + Column('book', Integer, ForeignKey('books.id'), primary_key=True), + Column('series', Integer, ForeignKey('series.id'), primary_key=True) + ) books_ratings_link = Table('books_ratings_link', Base.metadata, - Column('book', Integer, ForeignKey('books.id'), primary_key=True), - Column('rating', Integer, ForeignKey('ratings.id'), primary_key=True) - ) + Column('book', Integer, ForeignKey('books.id'), primary_key=True), + Column('rating', Integer, ForeignKey('ratings.id'), primary_key=True) + ) books_languages_link = Table('books_languages_link', Base.metadata, - Column('book', Integer, ForeignKey('books.id'), primary_key=True), - Column('lang_code', Integer, ForeignKey('languages.id'), primary_key=True) - ) + Column('book', Integer, 
ForeignKey('books.id'), primary_key=True), + Column('lang_code', Integer, ForeignKey('languages.id'), primary_key=True) + ) books_publishers_link = Table('books_publishers_link', Base.metadata, - Column('book', Integer, ForeignKey('books.id'), primary_key=True), - Column('publisher', Integer, ForeignKey('publishers.id'), primary_key=True) - ) + Column('book', Integer, ForeignKey('books.id'), primary_key=True), + Column('publisher', Integer, ForeignKey('publishers.id'), primary_key=True) + ) class Identifiers(Base): @@ -117,6 +119,12 @@ class Identifiers(Base): return u"Google Books" elif format_type == "kobo": return u"Kobo" + elif format_type == "litres": + return u"ЛитРес" + elif format_type == "issn": + return u"ISSN" + elif format_type == "isfdb": + return u"ISFDB" if format_type == "lubimyczytac": return u"Lubimyczytac" else: @@ -125,9 +133,9 @@ class Identifiers(Base): def __repr__(self): format_type = self.type.lower() if format_type == "amazon" or format_type == "asin": - return u"https://amzn.com/{0}".format(self.val) + return u"https://amazon.com/dp/{0}".format(self.val) elif format_type.startswith('amazon_'): - return u"https://amazon.{0}/{1}".format(format_type[7:], self.val) + return u"https://amazon.{0}/dp/{1}".format(format_type[7:], self.val) elif format_type == "isbn": return u"https://www.worldcat.org/isbn/{0}".format(self.val) elif format_type == "doi": @@ -141,11 +149,15 @@ class Identifiers(Base): elif format_type == "kobo": return u"https://www.kobo.com/ebook/{0}".format(self.val) elif format_type == "lubimyczytac": - return u" https://lubimyczytac.pl/ksiazka/{0}".format(self.val) - elif format_type == "url": - return u"{0}".format(self.val) + return u"https://lubimyczytac.pl/ksiazka/{0}/ksiazka".format(self.val) + elif format_type == "litres": + return u"https://www.litres.ru/{0}".format(self.val) + elif format_type == "issn": + return u"https://portal.issn.org/resource/ISSN/{0}".format(self.val) + elif format_type == "isfdb": + return 
u"http://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val) else: - return u"" + return u"{0}".format(self.val) class Comments(Base): @@ -159,6 +171,9 @@ class Comments(Base): self.text = text self.book = book + def get(self): + return self.text + def __repr__(self): return u"".format(self.text) @@ -172,6 +187,9 @@ class Tags(Base): def __init__(self, name): self.name = name + def get(self): + return self.name + def __repr__(self): return u"".format(self.name) @@ -189,6 +207,9 @@ class Authors(Base): self.sort = sort self.link = link + def get(self): + return self.name + def __repr__(self): return u"".format(self.name, self.sort, self.link) @@ -204,6 +225,9 @@ class Series(Base): self.name = name self.sort = sort + def get(self): + return self.name + def __repr__(self): return u"".format(self.name, self.sort) @@ -217,6 +241,9 @@ class Ratings(Base): def __init__(self, rating): self.rating = rating + def get(self): + return self.rating + def __repr__(self): return u"".format(self.rating) @@ -230,6 +257,12 @@ class Languages(Base): def __init__(self, lang_code): self.lang_code = lang_code + def get(self): + if self.language_name: + return self.language_name + else: + return self.lang_code + def __repr__(self): return u"".format(self.lang_code) @@ -245,13 +278,16 @@ class Publishers(Base): self.name = name self.sort = sort + def get(self): + return self.name + def __repr__(self): return u"".format(self.name, self.sort) class Data(Base): __tablename__ = 'data' - __table_args__ = {'schema':'calibre'} + __table_args__ = {'schema': 'calibre'} id = Column(Integer, primary_key=True) book = Column(Integer, ForeignKey('books.id'), nullable=False) @@ -265,6 +301,10 @@ class Data(Base): self.uncompressed_size = uncompressed_size self.name = name + # ToDo: Check + def get(self): + return self.name + def __repr__(self): return u"".format(self.book, self.format, self.uncompressed_size, self.name) @@ -272,14 +312,14 @@ class Data(Base): class Books(Base): __tablename__ = 'books' - 
DEFAULT_PUBDATE = "0101-01-01 00:00:00+00:00" + DEFAULT_PUBDATE = datetime(101, 1, 1, 0, 0, 0, 0) # ("0101-01-01 00:00:00+00:00") id = Column(Integer, primary_key=True, autoincrement=True) title = Column(String(collation='NOCASE'), nullable=False, default='Unknown') sort = Column(String(collation='NOCASE')) author_sort = Column(String(collation='NOCASE')) timestamp = Column(TIMESTAMP, default=datetime.utcnow) - pubdate = Column(String) # , default=datetime.utcnow) + pubdate = Column(TIMESTAMP, default=DEFAULT_PUBDATE) series_index = Column(String, nullable=False, default="1.0") last_modified = Column(TIMESTAMP, default=datetime.utcnow) path = Column(String, default="", nullable=False) @@ -309,7 +349,8 @@ class Books(Base): self.series_index = series_index self.last_modified = last_modified self.path = path - self.has_cover = has_cover + self.has_cover = (has_cover != None) + def __repr__(self): return u"".format(self.title, self.sort, self.author_sort, @@ -320,6 +361,7 @@ class Books(Base): def atom_timestamp(self): return (self.timestamp.strftime('%Y-%m-%dT%H:%M:%S+00:00') or '') + class Custom_Columns(Base): __tablename__ = 'custom_columns' @@ -340,46 +382,73 @@ class Custom_Columns(Base): return display_dict -class CalibreDB(threading.Thread): +class AlchemyEncoder(json.JSONEncoder): - def __init__(self): - threading.Thread.__init__(self) - self.engine = None - self.session = None - self.queue = None - self.log = None - self.config = None - - def add_queue(self,queue): - self.queue = queue - self.log = logger.create() - - def run(self): - while True: - i = self.queue.get() - if i == 'dummy': - self.queue.task_done() - break - if i['task'] == 'add_format': - cur_book = self.session.query(Books).filter(Books.id == i['id']).first() - cur_book.data.append(i['format']) + def default(self, o): + if isinstance(o.__class__, DeclarativeMeta): + # an SQLAlchemy class + fields = {} + for field in [x for x in dir(o) if not x.startswith('_') and x != 'metadata']: + if field 
== 'books': + continue + data = o.__getattribute__(field) try: - # db.session.merge(cur_book) - self.session.commit() - except OperationalError as e: - self.session.rollback() - self.log.error("Database error: %s", e) - # self._handleError(_(u"Database error: %(error)s.", error=e)) - # return - self.queue.task_done() + if isinstance(data, str): + data = data.replace("'", "\'") + elif isinstance(data, InstrumentedList): + el = list() + for ele in data: + if ele.get: + el.append(ele.get()) + else: + el.append(json.dumps(ele, cls=AlchemyEncoder)) + if field == 'authors': + data = " & ".join(el) + else: + data = ",".join(el) + if data == '[]': + data = "" + else: + json.dumps(data) + fields[field] = data + except Exception: + fields[field] = "" + # a json-encodable dict + return fields + + return json.JSONEncoder.default(self, o) - def stop(self): - self.queue.put('dummy') +class CalibreDB(): + _init = False + engine = None + config = None + session_factory = None + # This is a WeakSet so that references here don't keep other CalibreDB + # instances alive once they reach the end of their respective scopes + instances = WeakSet() - def setup_db(self, config, app_db_path): - self.config = config - self.dispose() + def __init__(self, expire_on_commit=True): + """ Initialize a new CalibreDB session + """ + self.session = None + if self._init: + self.initSession(expire_on_commit) + + self.instances.add(self) + + + def initSession(self, expire_on_commit=True): + self.session = self.session_factory() + self.session.expire_on_commit = expire_on_commit + self.update_title_sort(self.config) + + @classmethod + def setup_db(cls, config, app_db_path): + cls.config = config + cls.dispose() + + # toDo: if db changed -> delete shelfs, delete download books, delete read boks, kobo sync?? 
if not config.config_calibre_dir: config.invalidate() @@ -391,22 +460,21 @@ class CalibreDB(threading.Thread): return False try: - self.engine = create_engine('sqlite://', - echo=False, - isolation_level="SERIALIZABLE", - connect_args={'check_same_thread': False}, - poolclass=StaticPool) - self.engine.execute("attach database '{}' as calibre;".format(dbpath)) - self.engine.execute("attach database '{}' as app_settings;".format(app_db_path)) + cls.engine = create_engine('sqlite://', + echo=False, + isolation_level="SERIALIZABLE", + connect_args={'check_same_thread': False}, + poolclass=StaticPool) + cls.engine.execute("attach database '{}' as calibre;".format(dbpath)) + cls.engine.execute("attach database '{}' as app_settings;".format(app_db_path)) - conn = self.engine.connect() + conn = cls.engine.connect() # conn.text_factory = lambda b: b.decode(errors = 'ignore') possible fix for #1302 except Exception as e: config.invalidate(e) return False config.db_configured = True - self.update_title_sort(config, conn.connection) if not cc_classes: cc = conn.execute("SELECT id, datatype FROM custom_columns") @@ -421,12 +489,12 @@ class CalibreDB(threading.Thread): 'book': Column(Integer, ForeignKey('books.id'), primary_key=True), 'map_value': Column('value', Integer, - ForeignKey('custom_column_' + - str(row.id) + '.id'), - primary_key=True), + ForeignKey('custom_column_' + + str(row.id) + '.id'), + primary_key=True), 'extra': Column(Float), - 'asoc' : relationship('custom_column_' + str(row.id), uselist=False), - 'value' : association_proxy('asoc', 'value') + 'asoc': relationship('custom_column_' + str(row.id), uselist=False), + 'value': association_proxy('asoc', 'value') } books_custom_column_links[row.id] = type(str('books_custom_column_' + str(row.id) + '_link'), (Base,), dicttable) @@ -462,7 +530,7 @@ class CalibreDB(threading.Thread): 'custom_column_' + str(cc_id[0]), relationship(cc_classes[cc_id[0]], primaryjoin=( - Books.id == cc_classes[cc_id[0]].book), + Books.id 
== cc_classes[cc_id[0]].book), backref='books')) elif (cc_id[1] == 'series'): setattr(Books, @@ -476,24 +544,27 @@ class CalibreDB(threading.Thread): secondary=books_custom_column_links[cc_id[0]], backref='books')) - Session = scoped_session(sessionmaker(autocommit=False, - autoflush=False, - bind=self.engine)) - self.session = Session() + cls.session_factory = scoped_session(sessionmaker(autocommit=False, + autoflush=True, + bind=cls.engine)) + for inst in cls.instances: + inst.initSession() + + cls._init = True return True def get_book(self, book_id): return self.session.query(Books).filter(Books.id == book_id).first() def get_filtered_book(self, book_id, allow_show_archived=False): - return self.session.query(Books).filter(Books.id == book_id).\ + return self.session.query(Books).filter(Books.id == book_id). \ filter(self.common_filters(allow_show_archived)).first() def get_book_by_uuid(self, book_uuid): return self.session.query(Books).filter(Books.uuid == book_uuid).first() - def get_book_format(self, book_id, format): - return self.session.query(Data).filter(Data.book == book_id).filter(Data.format == format).first() + def get_book_format(self, book_id, file_format): + return self.session.query(Data).filter(Data.book == book_id).filter(Data.format == file_format).first() # Language and content filters for displaying in the UI def common_filters(self, allow_show_archived=False): @@ -533,10 +604,12 @@ class CalibreDB(threading.Thread): pos_content_cc_filter, ~neg_content_cc_filter, archived_filter) # Fill indexpage with all requested data from database - def fill_indexpage(self, page, database, db_filter, order, *join): - return self.fill_indexpage_with_archived_books(page, database, db_filter, order, False, *join) + def fill_indexpage(self, page, pagesize, database, db_filter, order, *join): + return self.fill_indexpage_with_archived_books(page, pagesize, database, db_filter, order, False, *join) - def fill_indexpage_with_archived_books(self, page, database, 
db_filter, order, allow_show_archived, *join): + def fill_indexpage_with_archived_books(self, page, pagesize, database, db_filter, order, allow_show_archived, + *join): + pagesize = pagesize or self.config.config_books_per_page if current_user.show_detail_random(): randm = self.session.query(Books) \ .filter(self.common_filters(allow_show_archived)) \ @@ -544,16 +617,21 @@ class CalibreDB(threading.Thread): .limit(self.config.config_random_books) else: randm = false() - off = int(int(self.config.config_books_per_page) * (page - 1)) + off = int(int(pagesize) * (page - 1)) query = self.session.query(database) \ .join(*join, isouter=True) \ .filter(db_filter) \ .filter(self.common_filters(allow_show_archived)) - pagination = Pagination(page, self.config.config_books_per_page, - len(query.all())) - entries = query.order_by(*order).offset(off).limit(self.config.config_books_per_page).all() - for book in entries: - book = self.order_authors(book) + entries = list() + pagination = list() + try: + pagination = Pagination(page, pagesize, + len(query.all())) + entries = query.order_by(*order).offset(off).limit(pagesize).all() + except Exception as e: + log.debug_or_exception(e) + #for book in entries: + # book = self.order_authors(book) return entries, randm, pagination # Orders all Authors in the list according to authors sort @@ -561,13 +639,16 @@ class CalibreDB(threading.Thread): sort_authors = entry.author_sort.split('&') authors_ordered = list() error = False + ids = [a.id for a in entry.authors] for auth in sort_authors: + results = self.session.query(Authors).filter(Authors.sort == auth.lstrip().strip()).all() # ToDo: How to handle not found authorname - result = self.session.query(Authors).filter(Authors.sort == auth.lstrip().strip()).first() - if not result: + if not len(results): error = True break - authors_ordered.append(result) + for r in results: + if r.id in ids: + authors_ordered.append(r) if not error: entry.authors = authors_ordered return entry @@ -587,24 
+668,37 @@ class CalibreDB(threading.Thread): for authorterm in authorterms: q.append(Books.authors.any(func.lower(Authors.name).ilike("%" + authorterm + "%"))) - return self.session.query(Books)\ + return self.session.query(Books) \ .filter(and_(Books.authors.any(and_(*q)), func.lower(Books.title).ilike("%" + title + "%"))).first() # read search results from calibre-database and return it (function is used for feed and simple search - def get_search_results(self, term): + def get_search_results(self, term, offset=None, order=None, limit=None): + order = order or [Books.sort] + pagination = None term.strip().lower() self.session.connection().connection.connection.create_function("lower", 1, lcase) q = list() authorterms = re.split("[, ]+", term) for authorterm in authorterms: q.append(Books.authors.any(func.lower(Authors.name).ilike("%" + authorterm + "%"))) - return self.session.query(Books).filter(self.common_filters(True)).filter( + result = self.session.query(Books).filter(self.common_filters(True)).filter( or_(Books.tags.any(func.lower(Tags.name).ilike("%" + term + "%")), Books.series.any(func.lower(Series.name).ilike("%" + term + "%")), Books.authors.any(and_(*q)), Books.publishers.any(func.lower(Publishers.name).ilike("%" + term + "%")), func.lower(Books.title).ilike("%" + term + "%") - )).order_by(Books.sort).all() + )).order_by(*order).all() + result_count = len(result) + if offset != None and limit != None: + offset = int(offset) + limit_all = offset + int(limit) + pagination = Pagination((offset / (int(limit)) + 1), limit, result_count) + else: + offset = 0 + limit_all = result_count + + ub.store_ids(result) + return result[offset:limit_all], result_count, pagination # Creates for all stored languages a translated speaking name in the array for the UI def speaking_language(self, languages=None): @@ -638,17 +732,23 @@ class CalibreDB(threading.Thread): conn = conn or self.session.connection().connection.connection conn.create_function("title_sort", 1, 
_title_sort) - def dispose(self): + @classmethod + def dispose(cls): # global session - old_session = self.session - self.session = None - if old_session: - try: old_session.close() - except: pass - if old_session.bind: - try: old_session.bind.dispose() - except Exception: pass + for inst in cls.instances: + old_session = inst.session + inst.session = None + if old_session: + try: + old_session.close() + except Exception: + pass + if old_session.bind: + try: + old_session.bind.dispose() + except Exception: + pass for attr in list(Books.__dict__.keys()): if attr.startswith("custom_column_"): @@ -665,14 +765,15 @@ class CalibreDB(threading.Thread): Base.metadata.remove(table) def reconnect_db(self, config, app_db_path): - self.session.close() + self.dispose() self.engine.dispose() self.setup_db(config, app_db_path) + def lcase(s): try: return unidecode.unidecode(s.lower()) except Exception as e: log = logger.create() - log.exception(e) + log.debug_or_exception(e) return s.lower() diff --git a/cps/debug_info.py b/cps/debug_info.py new file mode 100644 index 00000000..8f0cdeee --- /dev/null +++ b/cps/debug_info.py @@ -0,0 +1,65 @@ +# -*- coding: utf-8 -*- + +# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web) +# Copyright (C) 2012-2019 cervinko, idalin, SiphonSquirrel, ouzklcn, akushsky, +# OzzieIsaacs, bodybybuddha, jkrehm, matthazinski, janeczku +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. 
If not, see . + +import shutil +import glob +import zipfile +import json +from io import BytesIO +try: + from StringIO import StringIO +except ImportError: + from io import StringIO + +import os + +from flask import send_file + +from . import logger, config +from .about import collect_stats + +log = logger.create() + +def assemble_logfiles(file_name): + log_list = sorted(glob.glob(file_name + '*'), reverse=True) + wfd = StringIO() + for f in log_list: + with open(f, 'r') as fd: + shutil.copyfileobj(fd, wfd) + wfd.seek(0) + return send_file(wfd, + as_attachment=True, + attachment_filename=os.path.basename(file_name)) + +def send_debug(): + file_list = glob.glob(logger.get_logfile(config.config_logfile) + '*') + file_list.extend(glob.glob(logger.get_accesslogfile(config.config_access_logfile) + '*')) + for element in [logger.LOG_TO_STDOUT, logger.LOG_TO_STDERR]: + if element in file_list: + file_list.remove(element) + memory_zip = BytesIO() + with zipfile.ZipFile(memory_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf: + zf.writestr('settings.txt', json.dumps(config.toDict())) + zf.writestr('libs.txt', json.dumps(collect_stats())) + for fp in file_list: + zf.write(fp, os.path.basename(fp)) + memory_zip.seek(0) + return send_file(memory_zip, + as_attachment=True, + attachment_filename="Calibre-Web-debug-pack.zip") diff --git a/cps/editbooks.py b/cps/editbooks.py index cf1eb4c3..1b1e13a2 100644 --- a/cps/editbooks.py +++ b/cps/editbooks.py @@ -30,18 +30,45 @@ from uuid import uuid4 from flask import Blueprint, request, flash, redirect, url_for, abort, Markup, Response from flask_babel import gettext as _ from flask_login import current_user, login_required -from sqlalchemy.exc import OperationalError - +from sqlalchemy.exc import OperationalError, IntegrityError +from sqlite3 import OperationalError as sqliteOperationalError from . import constants, logger, isoLanguages, gdriveutils, uploader, helper -from . import config, get_locale, ub, worker, db +from . 
import config, get_locale, ub, db from . import calibre_db -from .web import login_required_if_no_ano, render_title_template, edit_required, upload_required +from .services.worker import WorkerThread +from .tasks.upload import TaskUpload +from .render_template import render_title_template +from .usermanagement import login_required_if_no_ano + +try: + from functools import wraps +except ImportError: + pass # We're not using Python 3 editbook = Blueprint('editbook', __name__) log = logger.create() +def upload_required(f): + @wraps(f) + def inner(*args, **kwargs): + if current_user.role_upload() or current_user.role_admin(): + return f(*args, **kwargs) + abort(403) + + return inner + +def edit_required(f): + @wraps(f) + def inner(*args, **kwargs): + if current_user.role_edit() or current_user.role_admin(): + return f(*args, **kwargs) + abort(403) + + return inner + + # Modifies different Database objects, first check if elements have to be added to database, than check # if elements have to be deleted, because they are no longer used def modify_database_object(input_elements, db_book_object, db_object, db_session, db_type): @@ -172,27 +199,48 @@ def modify_identifiers(input_identifiers, db_identifiers, db_session): changed = True return changed, error - -@editbook.route("/delete//", defaults={'book_format': ""}) -@editbook.route("/delete///") +@editbook.route("/ajax/delete/") @login_required -def delete_book(book_id, book_format): +def delete_book_from_details(book_id): + return Response(delete_book(book_id,"", True), mimetype='application/json') + + +@editbook.route("/delete/", defaults={'book_format': ""}) +@editbook.route("/delete//") +@login_required +def delete_book_ajax(book_id, book_format): + return delete_book(book_id,book_format, False) + +def delete_book(book_id, book_format, jsonResponse): + warning = {} if current_user.role_delete_books(): book = calibre_db.get_book(book_id) if book: try: result, error = helper.delete_book(book, 
config.config_calibre_dir, book_format=book_format.upper()) if not result: - flash(error, category="error") - return redirect(url_for('editbook.edit_book', book_id=book_id)) + if jsonResponse: + return json.dumps({"location": url_for("editbook.edit_book"), + "type": "alert", + "format": "", + "error": error}), + else: + flash(error, category="error") + return redirect(url_for('editbook.edit_book', book_id=book_id)) if error: - flash(error, category="warning") + if jsonResponse: + warning = {"location": url_for("editbook.edit_book"), + "type": "warning", + "format": "", + "error": error} + else: + flash(error, category="warning") if not book_format: # delete book from Shelfs, Downloads, Read list ub.session.query(ub.BookShelf).filter(ub.BookShelf.book_id == book_id).delete() ub.session.query(ub.ReadBook).filter(ub.ReadBook.book_id == book_id).delete() ub.delete_download(book_id) - ub.session.commit() + ub.session_commit() # check if only this book links to: # author, language, series, tags, custom columns @@ -236,23 +284,34 @@ def delete_book(book_id, book_format): filter(db.Data.format == book_format).delete() calibre_db.session.commit() except Exception as e: - log.debug(e) + log.debug_or_exception(e) calibre_db.session.rollback() else: # book not found log.error('Book with id "%s" could not be deleted: not found', book_id) if book_format: - flash(_('Book Format Successfully Deleted'), category="success") - return redirect(url_for('editbook.edit_book', book_id=book_id)) + if jsonResponse: + return json.dumps([warning, {"location": url_for("editbook.edit_book", book_id=book_id), + "type": "success", + "format": book_format, + "message": _('Book Format Successfully Deleted')}]) + else: + flash(_('Book Format Successfully Deleted'), category="success") + return redirect(url_for('editbook.edit_book', book_id=book_id)) else: - flash(_('Book Successfully Deleted'), category="success") - return redirect(url_for('web.index')) + if jsonResponse: + return 
json.dumps([warning, {"location": url_for('web.index'), + "type": "success", + "format": book_format, + "message": _('Book Successfully Deleted')}]) + else: + flash(_('Book Successfully Deleted'), category="success") + return redirect(url_for('web.index')) def render_edit_book(book_id): - calibre_db.update_title_sort(config) cc = calibre_db.session.query(db.Custom_Columns).filter(db.Custom_Columns.datatype.notin_(db.cc_exceptions)).all() - book = calibre_db.get_filtered_book(book_id) + book = calibre_db.get_filtered_book(book_id, allow_show_archived=True) if not book: flash(_(u"Error opening eBook. File does not exist or file is not accessible"), category="error") return redirect(url_for("web.index")) @@ -272,7 +331,7 @@ def render_edit_book(book_id): kepub_possible=None if config.config_converterpath: for file in book.data: - if file.format.lower() in constants.EXTENSIONS_CONVERT: + if file.format.lower() in constants.EXTENSIONS_CONVERT_FROM: valid_source_formats.append(file.format.lower()) if config.config_kepubifypath and 'epub' in [file.format.lower() for file in book.data]: kepub_possible = True @@ -281,7 +340,7 @@ def render_edit_book(book_id): # Determine what formats don't already exist if config.config_converterpath: - allowed_conversion_formats = constants.EXTENSIONS_CONVERT[:] + allowed_conversion_formats = constants.EXTENSIONS_CONVERT_TO[:] for file in book.data: if file.format.lower() in allowed_conversion_formats: allowed_conversion_formats.remove(file.format.lower()) @@ -357,7 +416,10 @@ def edit_book_comments(comments, book): def edit_book_languages(languages, book, upload=False): input_languages = languages.split(',') unknown_languages = [] - input_l = isoLanguages.get_language_codes(get_locale(), input_languages, unknown_languages) + if not upload: + input_l = isoLanguages.get_language_codes(get_locale(), input_languages, unknown_languages) + else: + input_l = isoLanguages.get_valid_language_codes(get_locale(), input_languages, unknown_languages) 
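The language-editing path above resolves user-typed language names to codes and collects anything unresolvable into `unknown_languages` for the warning flashes that follow. A sketch of that resolve-and-collect shape — the toy lookup table stands in for the full ISO-639 data behind `cps/isoLanguages.py`, and the assumption (implied by the calling code) is that the real helpers return a code list while appending unknowns to the passed-in list:

```python
# Illustrative subset only; the real data covers all ISO-639 names/codes.
_KNOWN = {'english': 'eng', 'german': 'deu', 'eng': 'eng', 'deu': 'deu'}

def get_language_codes(names, unknown):
    """Resolve display names to codes; collect unresolvable ones in `unknown`."""
    codes = []
    for name in (n.strip().lower() for n in names):
        if name in _KNOWN:
            codes.append(_KNOWN[name])
        else:
            unknown.append(name)
    return codes
```

The out-parameter style lets the caller keep the happy-path result and the warning material from a single pass, which is why the patched code can flash one message per unknown language afterwards.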
for l in unknown_languages: log.error('%s is not a valid language', l) flash(_(u"%(langname)s is not a valid language", langname=l), category="warning") @@ -367,7 +429,7 @@ def edit_book_languages(languages, book, upload=False): # the book it's language is set to the filter language if input_l[0] != current_user.filter_language() and current_user.filter_language() != "all": input_l[0] = calibre_db.session.query(db.Languages). \ - filter(db.Languages.lang_code == current_user.filter_language()).first() + filter(db.Languages.lang_code == current_user.filter_language()).first().lang_code # Remove duplicates input_l = helper.uniq(input_l) return modify_database_object(input_l, book.languages, db.Languages, calibre_db.session, 'languages') @@ -466,9 +528,11 @@ def upload_single_file(request, book, book_id): requested_file = request.files['btn-upload-format'] # check for empty request if requested_file.filename != '': + if not current_user.role_upload(): + abort(403) if '.' in requested_file.filename: file_ext = requested_file.filename.rsplit('.', 1)[-1].lower() - if file_ext not in constants.EXTENSIONS_UPLOAD: + if file_ext not in constants.EXTENSIONS_UPLOAD and '' not in constants.EXTENSIONS_UPLOAD: flash(_("File extension '%(ext)s' is not allowed to be uploaded to this server", ext=file_ext), category="error") return redirect(url_for('web.show_book', book_id=book.id)) @@ -505,7 +569,7 @@ def upload_single_file(request, book, book_id): calibre_db.session.add(db_format) calibre_db.session.commit() calibre_db.update_title_sort(config) - except OperationalError as e: + except (OperationalError, IntegrityError) as e: calibre_db.session.rollback() log.error('Database error: %s', e) flash(_(u"Database error: %(error)s.", error=e), category="error") @@ -513,8 +577,8 @@ def upload_single_file(request, book, book_id): # Queue uploader info uploadText=_(u"File format %(ext)s added to %(book)s", ext=file_ext.upper(), book=book.title) - worker.add_upload(current_user.nickname, - 
"" + uploadText + "") + WorkerThread.add(current_user.nickname, TaskUpload( + "" + uploadText + "")) return uploader.process( saved_filename, *os.path.splitext(requested_file.filename), @@ -526,6 +590,8 @@ def upload_cover(request, book): requested_file = request.files['btn-upload-cover'] # check for empty request if requested_file.filename != '': + if not current_user.role_upload(): + abort(403) ret, message = helper.save_cover(requested_file, book.path) if ret is True: return True @@ -540,13 +606,20 @@ def upload_cover(request, book): @edit_required def edit_book(book_id): modif_date = False + + # create the function for sorting... + try: + calibre_db.update_title_sort(config) + except sqliteOperationalError as e: + log.debug_or_exception(e) + calibre_db.session.rollback() + # Show form if request.method != 'POST': return render_edit_book(book_id) - # create the function for sorting... - calibre_db.update_title_sort(config) - book = calibre_db.get_filtered_book(book_id) + + book = calibre_db.get_filtered_book(book_id, allow_show_archived=True) # Book not found if not book: @@ -562,6 +635,7 @@ def edit_book(book_id): merge_metadata(to_save, meta) # Update book edited_books_id = None + #handle book title if book.title != to_save["book_title"].rstrip().strip(): if to_save["book_title"] == '': @@ -605,13 +679,19 @@ def edit_book(book_id): error = helper.update_dir_stucture(edited_books_id, config.config_calibre_dir, input_authors[0]) if not error: - if to_save["cover_url"]: - result, error = helper.save_cover_from_url(to_save["cover_url"], book.path) - if result is True: - book.has_cover = 1 - modif_date = True - else: - flash(error, category="error") + if "cover_url" in to_save: + if to_save["cover_url"]: + if not current_user.role_upload(): + return "", (403) + if to_save["cover_url"].endswith('/static/generic_cover.jpg'): + book.has_cover = 0 + else: + result, error = helper.save_cover_from_url(to_save["cover_url"], book.path) + if result is True: + book.has_cover 
= 1 + modif_date = True + else: + flash(error, category="error") # Add default series_index to book modif_date |= edit_book_series_index(to_save["series_index"], book) @@ -653,6 +733,7 @@ def edit_book(book_id): if modif_date: book.last_modified = datetime.utcnow() + calibre_db.session.merge(book) calibre_db.session.commit() if config.config_use_google_drive: gdriveutils.updateGdriveCalibreFromLocal() @@ -666,7 +747,7 @@ def edit_book(book_id): flash(error, category="error") return render_edit_book(book_id) except Exception as e: - log.exception(e) + log.debug_or_exception(e) calibre_db.session.rollback() flash(_("Error editing book, please check logfile for details"), category="error") return redirect(url_for('web.show_book', book_id=book.id)) @@ -716,7 +797,7 @@ def upload(): # check if file extension is correct if '.' in requested_file.filename: file_ext = requested_file.filename.rsplit('.', 1)[-1].lower() - if file_ext not in constants.EXTENSIONS_UPLOAD: + if file_ext not in constants.EXTENSIONS_UPLOAD and '' not in constants.EXTENSIONS_UPLOAD: flash( _("File extension '%(ext)s' is not allowed to be uploaded to this server", ext=file_ext), category="error") @@ -768,42 +849,17 @@ def upload(): if not db_author: db_author = stored_author sort_author = stored_author.sort - sort_authors_list.append(sort_author) # helper.get_sorted_author(sort_author)) + sort_authors_list.append(sort_author) sort_authors = ' & '.join(sort_authors_list) title_dir = helper.get_valid_filename(title) author_dir = helper.get_valid_filename(db_author.name) - filepath = os.path.join(config.config_calibre_dir, author_dir, title_dir) - saved_filename = os.path.join(filepath, title_dir + meta.extension.lower()) - - # check if file path exists, otherwise create it, copy file to calibre path and delete temp file - if not os.path.exists(filepath): - try: - os.makedirs(filepath) - except OSError: - log.error("Failed to create path %s (Permission denied)", filepath) - flash(_(u"Failed to create 
path %(path)s (Permission denied).", path=filepath), category="error") - return Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json') - try: - copyfile(meta.file_path, saved_filename) - os.unlink(meta.file_path) - except OSError as e: - log.error("Failed to move file %s: %s", saved_filename, e) - flash(_(u"Failed to Move File %(file)s: %(error)s", file=saved_filename, error=e), category="error") - return Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json') - - if meta.cover is None: - has_cover = 0 - copyfile(os.path.join(constants.STATIC_DIR, 'generic_cover.jpg'), - os.path.join(filepath, "cover.jpg")) - else: - has_cover = 1 # combine path and normalize path from windows systems path = os.path.join(author_dir, title_dir).replace('\\', '/') # Calibre adds books with utc as timezone db_book = db.Books(title, "", sort_authors, datetime.utcnow(), datetime(101, 1, 1), - '1', datetime.utcnow(), path, has_cover, db_author, [], "") + '1', datetime.utcnow(), path, meta.cover, db_author, [], "") modif_date |= modify_database_object(input_authors, db_book.authors, db.Authors, calibre_db.session, 'author') @@ -821,7 +877,7 @@ def upload(): modif_date |= edit_book_series(meta.series, db_book) # Add file to book - file_size = os.path.getsize(saved_filename) + file_size = os.path.getsize(meta.file_path) db_data = db.Data(db_book, meta.extension.upper()[1:], file_size, title_dir) db_book.data.append(db_data) calibre_db.session.add(db_book) @@ -829,39 +885,44 @@ def upload(): # flush content, get db_book.id available calibre_db.session.flush() - # Comments needs book id therfore only possiblw after flush + # Comments needs book id therfore only possible after flush modif_date |= edit_book_comments(Markup(meta.description).unescape(), db_book) book_id = db_book.id title = db_book.title - error = helper.update_dir_stucture(book_id, config.config_calibre_dir, input_authors[0]) + error = 
helper.update_dir_structure_file(book_id, + config.config_calibre_dir, + input_authors[0], + meta.file_path, + title_dir + meta.extension) # move cover to final directory, including book id - if has_cover: - new_coverpath = os.path.join(config.config_calibre_dir, db_book.path, "cover.jpg") - try: - copyfile(meta.cover, new_coverpath) + if meta.cover: + coverfile = meta.cover + else: + coverfile = os.path.join(constants.STATIC_DIR, 'generic_cover.jpg') + new_coverpath = os.path.join(config.config_calibre_dir, db_book.path, "cover.jpg") + try: + copyfile(coverfile, new_coverpath) + if meta.cover: os.unlink(meta.cover) - except OSError as e: - log.error("Failed to move cover file %s: %s", new_coverpath, e) - flash(_(u"Failed to Move Cover File %(file)s: %(error)s", file=new_coverpath, - error=e), - category="error") + except OSError as e: + log.error("Failed to move cover file %s: %s", new_coverpath, e) + flash(_(u"Failed to Move Cover File %(file)s: %(error)s", file=new_coverpath, + error=e), + category="error") # save data to database, reread data calibre_db.session.commit() - #calibre_db.setup_db(config, ub.app_DB_path) - # Reread book. 
It's important not to filter the result, as it could have language which hide it from - # current users view (tags are not stored/extracted from metadata and could also be limited) - #book = calibre_db.get_book(book_id) + if config.config_use_google_drive: gdriveutils.updateGdriveCalibreFromLocal() if error: flash(error, category="error") uploadText=_(u"File %(file)s uploaded", file=title) - worker.add_upload(current_user.nickname, - "" + uploadText + "") + WorkerThread.add(current_user.nickname, TaskUpload( + "" + uploadText + "")) if len(request.files.getlist("btn-upload")) < 2: if current_user.role_edit() or current_user.role_admin(): @@ -870,7 +931,7 @@ def upload(): else: resp = {"location": url_for('web.show_book', book_id=book_id)} return Response(json.dumps(resp), mimetype='application/json') - except OperationalError as e: + except (OperationalError, IntegrityError) as e: calibre_db.session.rollback() log.error("Database error: %s", e) flash(_(u"Database error: %(error)s.", error=e), category="error") @@ -899,3 +960,113 @@ def convert_bookformat(book_id): else: flash(_(u"There was an error converting this book: %(res)s", res=rtn), category="error") return redirect(url_for('editbook.edit_book', book_id=book_id)) + +@editbook.route("/ajax/editbooks/", methods=['POST']) +@login_required_if_no_ano +@edit_required +def edit_list_book(param): + vals = request.form.to_dict() + book = calibre_db.get_book(vals['pk']) + if param =='series_index': + edit_book_series_index(vals['value'], book) + elif param =='tags': + edit_book_tags(vals['value'], book) + elif param =='series': + edit_book_series(vals['value'], book) + elif param =='publishers': + vals['publisher'] = vals['value'] + edit_book_publisher(vals, book) + elif param =='languages': + edit_book_languages(vals['value'], book) + elif param =='author_sort': + book.author_sort = vals['value'] + elif param =='title': + book.title = vals['value'] + helper.update_dir_stucture(book.id, config.config_calibre_dir) + 
elif param =='sort': + book.sort = vals['value'] + # ToDo: edit books + elif param =='authors': + input_authors = vals['value'].split('&') + input_authors = list(map(lambda it: it.strip().replace(',', '|'), input_authors)) + modify_database_object(input_authors, book.authors, db.Authors, calibre_db.session, 'author') + sort_authors_list = list() + for inp in input_authors: + stored_author = calibre_db.session.query(db.Authors).filter(db.Authors.name == inp).first() + if not stored_author: + stored_author = helper.get_sorted_author(inp) + else: + stored_author = stored_author.sort + sort_authors_list.append(helper.get_sorted_author(stored_author)) + sort_authors = ' & '.join(sort_authors_list) + if book.author_sort != sort_authors: + book.author_sort = sort_authors + helper.update_dir_stucture(book.id, config.config_calibre_dir, input_authors[0]) + book.last_modified = datetime.utcnow() + calibre_db.session.commit() + return "" + +@editbook.route("/ajax/sort_value//") +@login_required +def get_sorted_entry(field, bookid): + if field == 'title' or field == 'authors': + book = calibre_db.get_filtered_book(bookid) + if book: + if field == 'title': + return json.dumps({'sort': book.sort}) + elif field == 'authors': + return json.dumps({'author_sort': book.author_sort}) + return "" + + +@editbook.route("/ajax/simulatemerge", methods=['POST']) +@login_required +@edit_required +def simulate_merge_list_book(): + vals = request.get_json().get('Merge_books') + if vals: + to_book = calibre_db.get_book(vals[0]).title + vals.pop(0) + if to_book: + for book_id in vals: + from_book = [] + from_book.append(calibre_db.get_book(book_id).title) + return json.dumps({'to': to_book, 'from': from_book}) + return "" + + +@editbook.route("/ajax/mergebooks", methods=['POST']) +@login_required +@edit_required +def merge_list_book(): + vals = request.get_json().get('Merge_books') + to_file = list() + if vals: + # load all formats from target book + to_book = calibre_db.get_book(vals[0]) + 
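The `authors` branch above rebuilds `author_sort` from the edited field by sorting each name and re-joining with `' & '`. A hedged sketch of that flow — `sorted_author()` below is a simplified stand-in for `helper.get_sorted_author()`, which also handles suffixes and pre-sorted comma forms this sketch ignores:

```python
def sorted_author(name: str) -> str:
    """Naive 'First Last' -> 'Last, First'; single tokens pass through."""
    parts = name.strip().split()
    if len(parts) < 2:
        return name.strip()
    return parts[-1] + ', ' + ' '.join(parts[:-1])

def author_sort_string(raw: str) -> str:
    """Mirror the edit handler: split on '&', protect commas, sort, re-join."""
    authors = [a.strip().replace(',', '|') for a in raw.split('&')]
    return ' & '.join(sorted_author(a) for a in authors)
```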
vals.pop(0) + if to_book: + for file in to_book.data: + to_file.append(file.format) + to_name = helper.get_valid_filename(to_book.title) + ' - ' + \ + helper.get_valid_filename(to_book.authors[0].name) + for book_id in vals: + from_book = calibre_db.get_book(book_id) + if from_book: + for element in from_book.data: + if element.format not in to_file: + # create new data entry with: book_id, book_format, uncompressed_size, name + filepath_new = os.path.normpath(os.path.join(config.config_calibre_dir, + to_book.path, + to_name + "." + element.format.lower())) + filepath_old = os.path.normpath(os.path.join(config.config_calibre_dir, + from_book.path, + element.name + "." + element.format.lower())) + copyfile(filepath_old, filepath_new) + to_book.data.append(db.Data(to_book.id, + element.format, + element.uncompressed_size, + to_name)) + delete_book(from_book.id,"", True) # json_resp = + return json.dumps({'success': True}) + return "" diff --git a/cps/epub.py b/cps/epub.py index bdba0607..583e4eda 100644 --- a/cps/epub.py +++ b/cps/epub.py @@ -26,6 +26,7 @@ from .helper import split_authors from .constants import BookMeta + def extractCover(zipFile, coverFile, coverpath, tmp_file_name): if coverFile is None: return None @@ -65,11 +66,11 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension): tmp = p.xpath('dc:%s/text()' % s, namespaces=ns) if len(tmp) > 0: if s == 'creator': - epub_metadata[s] = ' & '.join(split_authors(p.xpath('dc:%s/text()' % s, namespaces=ns))) + epub_metadata[s] = ' & '.join(split_authors(tmp)) elif s == 'subject': - epub_metadata[s] = ', '.join(p.xpath('dc:%s/text()' % s, namespaces=ns)) + epub_metadata[s] = ', '.join(tmp) else: - epub_metadata[s] = p.xpath('dc:%s/text()' % s, namespaces=ns)[0] + epub_metadata[s] = tmp[0] else: epub_metadata[s] = u'Unknown' @@ -83,16 +84,8 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension): else: epub_metadata['description'] = "" - if 
epub_metadata['language'] == u'Unknown': - epub_metadata['language'] = "" - else: - lang = epub_metadata['language'].split('-', 1)[0].lower() - if len(lang) == 2: - epub_metadata['language'] = isoLanguages.get(part1=lang).name - elif len(lang) == 3: - epub_metadata['language'] = isoLanguages.get(part3=lang).name - else: - epub_metadata['language'] = "" + lang = epub_metadata['language'].split('-', 1)[0].lower() + epub_metadata['language'] = isoLanguages.get_lang3(lang) series = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='calibre:series']/@content", namespaces=ns) if len(series) > 0: diff --git a/cps/error_handler.py b/cps/error_handler.py new file mode 100644 index 00000000..e9cb601a --- /dev/null +++ b/cps/error_handler.py @@ -0,0 +1,73 @@ +# -*- coding: utf-8 -*- + +# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web) +# Copyright (C) 2018-2020 OzzieIsaacs +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +import traceback +from flask import render_template +from werkzeug.exceptions import default_exceptions +try: + from werkzeug.exceptions import FailedDependency +except ImportError: + from werkzeug.exceptions import UnprocessableEntity as FailedDependency + +from . 
import config, app, logger, services + + +log = logger.create() + +# custom error page +def error_http(error): + return render_template('http_error.html', + error_code="Error {0}".format(error.code), + error_name=error.name, + issue=False, + instance=config.config_calibre_web_title + ), error.code + + +def internal_error(error): + return render_template('http_error.html', + error_code="Internal Server Error", + error_name=str(error), + issue=True, + error_stack=traceback.format_exc().split("\n"), + instance=config.config_calibre_web_title + ), 500 + +def init_errorhandler(): + # http error handling + for ex in default_exceptions: + if ex < 500: + app.register_error_handler(ex, error_http) + elif ex == 500: + app.register_error_handler(ex, internal_error) + + + if services.ldap: + # Only way of catching the LDAPException upon logging in with LDAP server down + @app.errorhandler(services.ldap.LDAPException) + def handle_exception(e): + log.debug('LDAP server not accessible while trying to login to opds feed') + return error_http(FailedDependency()) + + +# @app.errorhandler(InvalidRequestError) +#@app.errorhandler(OperationalError) +#def handle_db_exception(e): +# db.session.rollback() +# log.error('Database request error: %s',e) +# return internal_error(InternalServerError(e)) diff --git a/cps/gdrive.py b/cps/gdrive.py index 82a19890..950f3ce2 100644 --- a/cps/gdrive.py +++ b/cps/gdrive.py @@ -35,22 +35,22 @@ from flask_babel import gettext as _ from flask_login import login_required from . 
import logger, gdriveutils, config, ub, calibre_db -from .web import admin_required +from .admin import admin_required -gdrive = Blueprint('gdrive', __name__) +gdrive = Blueprint('gdrive', __name__, url_prefix='/gdrive') log = logger.create() try: from googleapiclient.errors import HttpError except ImportError as err: - log.debug(("Cannot import googleapiclient, using gdrive will not work: %s", err)) + log.debug("Cannot import googleapiclient, using GDrive will not work: %s", err) current_milli_time = lambda: int(round(time() * 1000)) -gdrive_watch_callback_token = 'target=calibreweb-watch_files' +gdrive_watch_callback_token = 'target=calibreweb-watch_files' #nosec -@gdrive.route("/gdrive/authenticate") +@gdrive.route("/authenticate") @login_required @admin_required def authenticate_google_drive(): @@ -63,7 +63,7 @@ def authenticate_google_drive(): return redirect(authUrl) -@gdrive.route("/gdrive/callback") +@gdrive.route("/callback") def google_drive_callback(): auth_code = request.args.get('code') if not auth_code: @@ -77,18 +77,14 @@ def google_drive_callback(): return redirect(url_for('admin.configuration')) -@gdrive.route("/gdrive/watch/subscribe") +@gdrive.route("/watch/subscribe") @login_required @admin_required def watch_gdrive(): if not config.config_google_drive_watch_changes_response: with open(gdriveutils.CLIENT_SECRETS, 'r') as settings: filedata = json.load(settings) - if filedata['web']['redirect_uris'][0].endswith('/'): - filedata['web']['redirect_uris'][0] = filedata['web']['redirect_uris'][0][:-((len('/gdrive/callback')+1))] - else: - filedata['web']['redirect_uris'][0] = filedata['web']['redirect_uris'][0][:-(len('/gdrive/callback'))] - address = '%s/gdrive/watch/callback' % filedata['web']['redirect_uris'][0] + address = filedata['web']['redirect_uris'][0].rstrip('/').replace('/gdrive/callback', '/gdrive/watch/callback') notification_id = str(uuid4()) try: result = gdriveutils.watchChange(gdriveutils.Gdrive.Instance().drive, notification_id, @@ 
-98,14 +94,15 @@ def watch_gdrive(): except HttpError as e: reason=json.loads(e.content)['error']['errors'][0] if reason['reason'] == u'push.webhookUrlUnauthorized': - flash(_(u'Callback domain is not verified, please follow steps to verify domain in google developer console'), category="error") + flash(_(u'Callback domain is not verified, ' + u'please follow steps to verify domain in google developer console'), category="error") else: flash(reason['message'], category="error") return redirect(url_for('admin.configuration')) -@gdrive.route("/gdrive/watch/revoke") +@gdrive.route("/watch/revoke") @login_required @admin_required def revoke_watch_gdrive(): @@ -121,40 +118,43 @@ def revoke_watch_gdrive(): return redirect(url_for('admin.configuration')) -@gdrive.route("/gdrive/watch/callback", methods=['GET', 'POST']) +@gdrive.route("/watch/callback", methods=['GET', 'POST']) def on_received_watch_confirmation(): + if not config.config_google_drive_watch_changes_response: + return '' + if request.headers.get('X-Goog-Channel-Token') != gdrive_watch_callback_token \ + or request.headers.get('X-Goog-Resource-State') != 'change' \ + or not request.data: + return '' + log.debug('%r', request.headers) - if request.headers.get('X-Goog-Channel-Token') == gdrive_watch_callback_token \ - and request.headers.get('X-Goog-Resource-State') == 'change' \ - and request.data: + log.debug('%r', request.data) + log.info('Change received from gdrive') - data = request.data + try: + j = json.loads(request.data) + log.info('Getting change details') + response = gdriveutils.getChangeById(gdriveutils.Gdrive.Instance().drive, j['id']) + log.debug('%r', response) + if response: + if sys.version_info < (3, 0): + dbpath = os.path.join(config.config_calibre_dir, "metadata.db") + else: + dbpath = os.path.join(config.config_calibre_dir, "metadata.db").encode() + if not response['deleted'] and response['file']['title'] == 'metadata.db' \ + and response['file']['md5Checksum'] != hashlib.md5(dbpath): + 
tmp_dir = os.path.join(tempfile.gettempdir(), 'calibre_web') + if not os.path.isdir(tmp_dir): + os.mkdir(tmp_dir) - def updateMetaData(): - log.info('Change received from gdrive') - log.debug('%r', data) - try: - j = json.loads(data) - log.info('Getting change details') - response = gdriveutils.getChangeById(gdriveutils.Gdrive.Instance().drive, j['id']) - log.debug('%r', response) - if response: - if sys.version_info < (3, 0): - dbpath = os.path.join(config.config_calibre_dir, "metadata.db") - else: - dbpath = os.path.join(config.config_calibre_dir, "metadata.db").encode() - if not response['deleted'] and response['file']['title'] == 'metadata.db' \ - and response['file']['md5Checksum'] != hashlib.md5(dbpath): - tmpDir = tempfile.gettempdir() - log.info('Database file updated') - copyfile(dbpath, os.path.join(tmpDir, "metadata.db_" + str(current_milli_time()))) - log.info('Backing up existing and downloading updated metadata.db') - gdriveutils.downloadFile(None, "metadata.db", os.path.join(tmpDir, "tmp_metadata.db")) - log.info('Setting up new DB') - # prevent error on windows, as os.rename does on exisiting files - move(os.path.join(tmpDir, "tmp_metadata.db"), dbpath) - calibre_db.reconnect_db(config, ub.app_DB_path) - except Exception as e: - log.exception(e) - updateMetaData() + log.info('Database file updated') + copyfile(dbpath, os.path.join(tmp_dir, "metadata.db_" + str(current_milli_time()))) + log.info('Backing up existing and downloading updated metadata.db') + gdriveutils.downloadFile(None, "metadata.db", os.path.join(tmp_dir, "tmp_metadata.db")) + log.info('Setting up new DB') + # prevent error on windows, as os.rename does on existing files, also allow cross hdd move + move(os.path.join(tmp_dir, "tmp_metadata.db"), dbpath) + calibre_db.reconnect_db(config, ub.app_DB_path) + except Exception as e: + log.debug_or_exception(e) return '' diff --git a/cps/gdriveutils.py b/cps/gdriveutils.py index a996f879..79587b79 100644 --- a/cps/gdriveutils.py +++ 
b/cps/gdriveutils.py @@ -20,6 +20,8 @@ from __future__ import division, print_function, unicode_literals import os import json import shutil +import chardet +import ssl from flask import Response, stream_with_context from sqlalchemy import create_engine @@ -30,16 +32,25 @@ from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.exc import OperationalError, InvalidRequestError try: - from pydrive.auth import GoogleAuth - from pydrive.drive import GoogleDrive - from pydrive.auth import RefreshError from apiclient import errors from httplib2 import ServerNotFoundError - gdrive_support = True importError = None -except ImportError as err: - importError = err + gdrive_support = True +except ImportError as e: + importError = e gdrive_support = False +try: + from pydrive2.auth import GoogleAuth + from pydrive2.drive import GoogleDrive + from pydrive2.auth import RefreshError +except ImportError as err: + try: + from pydrive.auth import GoogleAuth + from pydrive.drive import GoogleDrive + from pydrive.auth import RefreshError + except ImportError as err: + importError = err + gdrive_support = False from . 
import logger, cli, config from .constants import CONFIG_DIR as _CONFIG_DIR @@ -89,7 +100,7 @@ class Singleton: except AttributeError: self._instance = self._decorated() return self._instance - except ImportError as e: + except (ImportError, NameError) as e: log.debug(e) return None @@ -188,7 +199,7 @@ def getDrive(drive=None, gauth=None): except RefreshError as e: log.error("Google Drive error: %s", e) except Exception as e: - log.exception(e) + log.debug_or_exception(e) else: # Initialize the saved creds gauth.Authorize() @@ -206,14 +217,14 @@ def listRootFolders(): drive = getDrive(Gdrive.Instance().drive) folder = "'root' in parents and mimeType = 'application/vnd.google-apps.folder' and trashed = false" fileList = drive.ListFile({'q': folder}).GetList() - except ServerNotFoundError as e: + except (ServerNotFoundError, ssl.SSLError) as e: log.info("GDrive Error %s" % e) fileList = [] return fileList def getEbooksFolder(drive): - return getFolderInFolder('root',config.config_google_drive_folder,drive) + return getFolderInFolder('root', config.config_google_drive_folder, drive) def getFolderInFolder(parentId, folderName, drive): @@ -243,7 +254,7 @@ def getEbooksFolderId(drive=None): gDriveId.path = '/' session.merge(gDriveId) session.commit() - return + return gDriveId.gdrive_id def getFile(pathId, fileName, drive): @@ -383,7 +394,8 @@ def uploadFileToEbooksFolder(destFile, f): if len(existingFiles) > 0: driveFile = existingFiles[0] else: - driveFile = drive.CreateFile({'title': x, 'parents': [{"kind": "drive#fileLink", 'id': parent['id']}],}) + driveFile = drive.CreateFile({'title': x, + 'parents': [{"kind": "drive#fileLink", 'id': parent['id']}], }) driveFile.SetContentFile(f) driveFile.Upload() else: @@ -545,21 +557,24 @@ def partial(total_byte_len, part_size_limit): return s # downloads files in chunks from gdrive -def do_gdrive_download(df, headers): +def do_gdrive_download(df, headers, convert_encoding=False): total_size = int(df.metadata.get('fileSize')) 
download_url = df.metadata.get('downloadUrl') s = partial(total_size, 1024 * 1024) # I'm downloading BIG files, so 100M chunk size is fine for me - def stream(): + def stream(convert_encoding): for byte in s: headers = {"Range": 'bytes=%s-%s' % (byte[0], byte[1])} resp, content = df.auth.Get_Http_Object().request(download_url, headers=headers) if resp.status == 206: + if convert_encoding: + result = chardet.detect(content) + content = content.decode(result['encoding']).encode('utf-8') yield content else: log.warning('An error occurred: %s', resp) return - return Response(stream_with_context(stream()), headers=headers) + return Response(stream_with_context(stream(convert_encoding)), headers=headers) _SETTINGS_YAML_TEMPLATE = """ diff --git a/cps/helper.py b/cps/helper.py index d40128d7..88c0550b 100644 --- a/cps/helper.py +++ b/cps/helper.py @@ -32,13 +32,12 @@ from tempfile import gettempdir import requests from babel.dates import format_datetime from babel.units import format_unit -from flask import send_from_directory, make_response, redirect, abort +from flask import send_from_directory, make_response, redirect, abort, url_for from flask_babel import gettext as _ from flask_login import current_user -from sqlalchemy.sql.expression import true, false, and_, text, func +from sqlalchemy.sql.expression import true, false, and_, text from werkzeug.datastructures import Headers from werkzeug.security import generate_password_hash -from . import calibre_db try: from urllib.parse import quote @@ -51,141 +50,155 @@ try: except ImportError: use_unidecode = False -try: - from PIL import Image as PILImage - from PIL import UnidentifiedImageError - use_PIL = True -except ImportError: - use_PIL = False - -from . import logger, config, get_locale, db, ub, worker +from . import calibre_db +from .tasks.convert import TaskConvert +from . import logger, config, get_locale, db, ub from . 
import gdriveutils as gd from .constants import STATIC_DIR as _STATIC_DIR from .subproc_wrapper import process_wait -from .worker import STAT_WAITING, STAT_FAIL, STAT_STARTED, STAT_FINISH_SUCCESS -from .worker import TASK_EMAIL, TASK_CONVERT, TASK_UPLOAD, TASK_CONVERT_ANY - +from .services.worker import WorkerThread, STAT_WAITING, STAT_FAIL, STAT_STARTED, STAT_FINISH_SUCCESS +from .tasks.mail import TaskEmail log = logger.create() +try: + from wand.image import Image + from wand.exceptions import MissingDelegateError + use_IM = True +except (ImportError, RuntimeError) as e: + log.debug('Cannot import Image, generating covers from non jpg files will not work: %s', e) + use_IM = False + MissingDelegateError = BaseException + # Convert existing book entry to new format def convert_book_format(book_id, calibrepath, old_book_format, new_book_format, user_id, kindle_mail=None): book = calibre_db.get_book(book_id) data = calibre_db.get_book_format(book.id, old_book_format) + file_path = os.path.join(calibrepath, book.path, data.name) if not data: error_message = _(u"%(format)s format not found for book id: %(book)d", format=old_book_format, book=book_id) log.error("convert_book_format: %s", error_message) return error_message if config.config_use_google_drive: - df = gd.getFileFromEbooksFolder(book.path, data.name + "." + old_book_format.lower()) - if df: - datafile = os.path.join(calibrepath, book.path, data.name + u"." + old_book_format.lower()) - if not os.path.exists(os.path.join(calibrepath, book.path)): - os.makedirs(os.path.join(calibrepath, book.path)) - df.GetContentFile(datafile) - else: + if not gd.getFileFromEbooksFolder(book.path, data.name + "." + old_book_format.lower()): error_message = _(u"%(format)s not found on Google Drive: %(fn)s", format=old_book_format, fn=data.name + "." + old_book_format.lower()) return error_message - file_path = os.path.join(calibrepath, book.path, data.name) - if os.path.exists(file_path + "." 
+ old_book_format.lower()): - # read settings and append converter task to queue - if kindle_mail: - settings = config.get_mail_settings() - settings['subject'] = _('Send to Kindle') # pretranslate Subject for e-mail - settings['body'] = _(u'This e-mail has been sent via Calibre-Web.') - # text = _(u"%(format)s: %(book)s", format=new_book_format, book=book.title) - else: - settings = dict() - txt = (u"%s -> %s: %s" % (old_book_format, new_book_format, book.title)) - settings['old_book_format'] = old_book_format - settings['new_book_format'] = new_book_format - worker.add_convert(file_path, book.id, user_id, txt, settings, kindle_mail) - return None else: - error_message = _(u"%(format)s not found: %(fn)s", - format=old_book_format, fn=data.name + "." + old_book_format.lower()) - return error_message + if not os.path.exists(file_path + "." + old_book_format.lower()): + error_message = _(u"%(format)s not found: %(fn)s", + format=old_book_format, fn=data.name + "." + old_book_format.lower()) + return error_message + # read settings and append converter task to queue + if kindle_mail: + settings = config.get_mail_settings() + settings['subject'] = _('Send to Kindle') # pretranslate Subject for e-mail + settings['body'] = _(u'This e-mail has been sent via Calibre-Web.') + else: + settings = dict() + txt = (u"%s -> %s: %s" % ( + old_book_format, + new_book_format, + "" + book.title + "")) + settings['old_book_format'] = old_book_format + settings['new_book_format'] = new_book_format + WorkerThread.add(user_id, TaskConvert(file_path, book.id, txt, settings, kindle_mail, user_id)) + return None def send_test_mail(kindle_mail, user_name): - worker.add_email(_(u'Calibre-Web test e-mail'), None, None, - config.get_mail_settings(), kindle_mail, user_name, - _(u"Test e-mail"), _(u'This e-mail has been sent via Calibre-Web.')) + WorkerThread.add(user_name, TaskEmail(_(u'Calibre-Web test e-mail'), None, None, + config.get_mail_settings(), kindle_mail, _(u"Test e-mail"), + 
_(u'This e-mail has been sent via Calibre-Web.')))
     return
 
 
 # Send registration email or password reset email, depending on parameter resend (False means welcome email)
 def send_registration_mail(e_mail, user_name, default_password, resend=False):
-    text = "Hello %s!\r\n" % user_name
+    txt = "Hello %s!\r\n" % user_name
     if not resend:
-        text += "Your new account at Calibre-Web has been created. Thanks for joining us!\r\n"
-        text += "Please log in to your account using the following informations:\r\n"
-        text += "User name: %s\r\n" % user_name
-        text += "Password: %s\r\n" % default_password
-        text += "Don't forget to change your password after first login.\r\n"
-        text += "Sincerely\r\n\r\n"
-        text += "Your Calibre-Web team"
-    worker.add_email(_(u'Get Started with Calibre-Web'), None, None,
-                     config.get_mail_settings(), e_mail, None,
-                     _(u"Registration e-mail for user: %(name)s", name=user_name), text)
+        txt += "Your new account at Calibre-Web has been created. Thanks for joining us!\r\n"
+        txt += "Please log in to your account using the following information:\r\n"
+        txt += "User name: %s\r\n" % user_name
+        txt += "Password: %s\r\n" % default_password
+        txt += "Don't forget to change your password after first login.\r\n"
+        txt += "Sincerely\r\n\r\n"
+        txt += "Your Calibre-Web team"
+    WorkerThread.add(None, TaskEmail(
+        subject=_(u'Get Started with Calibre-Web'),
+        filepath=None,
+        attachment=None,
+        settings=config.get_mail_settings(),
+        recipient=e_mail,
+        taskMessage=_(u"Registration e-mail for user: %(name)s", name=user_name),
+        text=txt
+    ))
     return
 
 
+def check_send_to_kindle_without_converter(entry):
+    bookformats = list()
+    # no converter - only for mobi and pdf formats
+    for ele in iter(entry.data):
+        if ele.uncompressed_size < config.mail_size:
+            if 'MOBI' in ele.format:
+                bookformats.append({'format': 'Mobi',
+                                    'convert': 0,
+                                    'text': _('Send %(format)s to Kindle', format='Mobi')})
+            if 'PDF' in ele.format:
+                bookformats.append({'format': 'Pdf',
+                                    'convert': 0,
'text': _('Send %(format)s to Kindle', format='Pdf')}) + if 'AZW' in ele.format: + bookformats.append({'format': 'Azw', + 'convert': 0, + 'text': _('Send %(format)s to Kindle', format='Azw')}) + return bookformats + +def check_send_to_kindle_with_converter(entry): + bookformats = list() + formats = list() + for ele in iter(entry.data): + if ele.uncompressed_size < config.mail_size: + formats.append(ele.format) + if 'MOBI' in formats: + bookformats.append({'format': 'Mobi', + 'convert': 0, + 'text': _('Send %(format)s to Kindle', format='Mobi')}) + if 'AZW' in formats: + bookformats.append({'format': 'Azw', + 'convert': 0, + 'text': _('Send %(format)s to Kindle', format='Azw')}) + if 'PDF' in formats: + bookformats.append({'format': 'Pdf', + 'convert': 0, + 'text': _('Send %(format)s to Kindle', format='Pdf')}) + if 'EPUB' in formats and 'MOBI' not in formats: + bookformats.append({'format': 'Mobi', + 'convert': 1, + 'text': _('Convert %(orig)s to %(format)s and send to Kindle', + orig='Epub', + format='Mobi')}) + if 'AZW3' in formats and not 'MOBI' in formats: + bookformats.append({'format': 'Mobi', + 'convert': 2, + 'text': _('Convert %(orig)s to %(format)s and send to Kindle', + orig='Azw3', + format='Mobi')}) + return bookformats + + def check_send_to_kindle(entry): """ returns all available book formats for sending to Kindle """ if len(entry.data): - bookformats = list() if not config.config_converterpath: - # no converter - only for mobi and pdf formats - for ele in iter(entry.data): - if ele.uncompressed_size < config.mail_size: - if 'MOBI' in ele.format: - bookformats.append({'format': 'Mobi', - 'convert': 0, - 'text': _('Send %(format)s to Kindle', format='Mobi')}) - if 'PDF' in ele.format: - bookformats.append({'format': 'Pdf', - 'convert': 0, - 'text': _('Send %(format)s to Kindle', format='Pdf')}) - if 'AZW' in ele.format: - bookformats.append({'format': 'Azw', - 'convert': 0, - 'text': _('Send %(format)s to Kindle', format='Azw')}) + book_formats = 
check_send_to_kindle_without_converter(entry)
         else:
-            formats = list()
-            for ele in iter(entry.data):
-                if ele.uncompressed_size < config.mail_size:
-                    formats.append(ele.format)
-            if 'MOBI' in formats:
-                bookformats.append({'format': 'Mobi',
-                                    'convert': 0,
-                                    'text': _('Send %(format)s to Kindle', format='Mobi')})
-            if 'AZW' in formats:
-                bookformats.append({'format': 'Azw',
-                                    'convert': 0,
-                                    'text': _('Send %(format)s to Kindle', format='Azw')})
-            if 'PDF' in formats:
-                bookformats.append({'format': 'Pdf',
-                                    'convert': 0,
-                                    'text': _('Send %(format)s to Kindle', format='Pdf')})
-            if config.config_converterpath:
-                if 'EPUB' in formats and not 'MOBI' in formats:
-                    bookformats.append({'format': 'Mobi',
-                                        'convert':1,
-                                        'text': _('Convert %(orig)s to %(format)s and send to Kindle',
-                                                  orig='Epub',
-                                                  format='Mobi')})
-                if 'AZW3' in formats and not 'MOBI' in formats:
-                    bookformats.append({'format': 'Mobi',
-                                        'convert': 2,
-                                        'text': _('Convert %(orig)s to %(format)s and send to Kindle',
-                                                  orig='Azw3',
-                                                  format='Mobi')})
-        return bookformats
+            book_formats = check_send_to_kindle_with_converter(entry)
+        return book_formats
     else:
         log.error(u'Cannot find book entry %d', entry.id)
         return None
@@ -221,9 +234,9 @@ def send_mail(book_id, book_format, convert, kindle_mail, calibrepath, user_id):
     for entry in iter(book.data):
         if entry.format.upper() == book_format.upper():
             converted_file_name = entry.name + '.' + book_format.lower()
-            worker.add_email(_(u"Send to Kindle"), book.path, converted_file_name,
-                             config.get_mail_settings(), kindle_mail, user_id,
-                             _(u"E-mail: %(book)s", book=book.title), _(u'This e-mail has been sent via Calibre-Web.'))
+            WorkerThread.add(user_id, TaskEmail(_(u"Send to Kindle"), book.path, converted_file_name,
+                                                config.get_mail_settings(), kindle_mail,
+                                                _(u"E-mail: %(book)s", book=book.title), _(u'This e-mail has been sent via Calibre-Web.')))
             return
     return _(u"The requested file could not be read.
Maybe wrong permissions?") @@ -343,66 +356,74 @@ def delete_book_file(book, calibrepath, book_format=None): path=book.path) -def update_dir_structure_file(book_id, calibrepath, first_author): +# Moves files in file storage during author/title rename, or from temp dir to file storage +def update_dir_structure_file(book_id, calibrepath, first_author, orignal_filepath, db_filename): + # get book database entry from id, if original path overwrite source with original_filepath localbook = calibre_db.get_book(book_id) - path = os.path.join(calibrepath, localbook.path) + if orignal_filepath: + path = orignal_filepath + else: + path = os.path.join(calibrepath, localbook.path) + # Create (current) authordir and titledir from database authordir = localbook.path.split('/')[0] + titledir = localbook.path.split('/')[1] + + # Create new_authordir from parameter or from database + # Create new titledir from database and add id if first_author: new_authordir = get_valid_filename(first_author) else: new_authordir = get_valid_filename(localbook.authors[0].name) - - titledir = localbook.path.split('/')[1] new_titledir = get_valid_filename(localbook.title) + " (" + str(book_id) + ")" - if titledir != new_titledir: - new_title_path = os.path.join(os.path.dirname(path), new_titledir) + if titledir != new_titledir or authordir != new_authordir or orignal_filepath: + new_path = os.path.join(calibrepath, new_authordir, new_titledir) + new_name = get_valid_filename(localbook.title) + ' - ' + get_valid_filename(new_authordir) try: - if not os.path.exists(new_title_path): - os.renames(os.path.normcase(path), os.path.normcase(new_title_path)) + if orignal_filepath: + if not os.path.isdir(new_path): + os.makedirs(new_path) + shutil.move(os.path.normcase(path), os.path.normcase(os.path.join(new_path, db_filename))) + log.debug("Moving title: %s to %s/%s", path, new_path, new_name) + # Check new path is not valid path else: - log.info("Copying title: %s into existing: %s", path, new_title_path) - 
for dir_name, __, file_list in os.walk(path): - for file in file_list: - os.renames(os.path.normcase(os.path.join(dir_name, file)), - os.path.normcase(os.path.join(new_title_path + dir_name[len(path):], file))) - path = new_title_path - localbook.path = localbook.path.split('/')[0] + '/' + new_titledir + if not os.path.exists(new_path): + # move original path to new path + log.debug("Moving title: %s to %s", path, new_path) + shutil.move(os.path.normcase(path), os.path.normcase(new_path)) + else: # path is valid copy only files to new location (merge) + log.info("Moving title: %s into existing: %s", path, new_path) + # Take all files and subfolder from old path (strange command) + for dir_name, __, file_list in os.walk(path): + for file in file_list: + shutil.move(os.path.normcase(os.path.join(dir_name, file)), + os.path.normcase(os.path.join(new_path + dir_name[len(path):], file))) + # os.unlink(os.path.normcase(os.path.join(dir_name, file))) + # change location in database to new author/title path + localbook.path = os.path.join(new_authordir, new_titledir).replace('\\','/') except OSError as ex: - log.error("Rename title from: %s to %s: %s", path, new_title_path, ex) + log.error("Rename title from: %s to %s: %s", path, new_path, ex) log.debug(ex, exc_info=True) return _("Rename title from: '%(src)s' to '%(dest)s' failed with error: %(error)s", - src=path, dest=new_title_path, error=str(ex)) - if authordir != new_authordir: - new_author_path = os.path.join(calibrepath, new_authordir, os.path.basename(path)) + src=path, dest=new_path, error=str(ex)) + + # Rename all files from old names to new names try: - os.renames(os.path.normcase(path), os.path.normcase(new_author_path)) - localbook.path = new_authordir + '/' + localbook.path.split('/')[1] - except OSError as ex: - log.error("Rename author from: %s to %s: %s", path, new_author_path, ex) - log.debug(ex, exc_info=True) - return _("Rename author from: '%(src)s' to '%(dest)s' failed with error: %(error)s", - 
src=path, dest=new_author_path, error=str(ex)) - # Rename all files from old names to new names - if authordir != new_authordir or titledir != new_titledir: - new_name = "" - try: - new_name = get_valid_filename(localbook.title) + ' - ' + get_valid_filename(new_authordir) - path_name = os.path.join(calibrepath, new_authordir, os.path.basename(path)) for file_format in localbook.data: - os.renames(os.path.normcase( - os.path.join(path_name, file_format.name + '.' + file_format.format.lower())), - os.path.normcase(os.path.join(path_name, new_name + '.' + file_format.format.lower()))) + shutil.move(os.path.normcase( + os.path.join(new_path, file_format.name + '.' + file_format.format.lower())), + os.path.normcase(os.path.join(new_path, new_name + '.' + file_format.format.lower()))) file_format.name = new_name + if not orignal_filepath and len(os.listdir(os.path.dirname(path))) == 0: + shutil.rmtree(os.path.dirname(path)) except OSError as ex: - log.error("Rename file in path %s to %s: %s", path, new_name, ex) + log.error("Rename file in path %s to %s: %s", new_path, new_name, ex) log.debug(ex, exc_info=True) return _("Rename file in path '%(src)s' to '%(dest)s' failed with error: %(error)s", - src=path, dest=new_name, error=str(ex)) + src=new_path, dest=new_name, error=str(ex)) return False - def update_dir_structure_gdrive(book_id, first_author): error = False book = calibre_db.get_book(book_id) @@ -505,11 +526,11 @@ def uniq(inpt): # ################################# External interface ################################# -def update_dir_stucture(book_id, calibrepath, first_author=None): +def update_dir_stucture(book_id, calibrepath, first_author=None, orignal_filepath=None, db_filename=None): if config.config_use_google_drive: return update_dir_structure_gdrive(book_id, first_author) else: - return update_dir_structure_file(book_id, calibrepath, first_author) + return update_dir_structure_file(book_id, calibrepath, first_author, orignal_filepath, db_filename) def 
delete_book(book, calibrepath, book_format):
@@ -550,8 +571,7 @@ def get_book_cover_internal(book, use_generic_cover_on_failure):
             log.error('%s/cover.jpg not found on Google Drive', book.path)
             return get_cover_on_failure(use_generic_cover_on_failure)
         except Exception as e:
-            log.exception(e)
-            # traceback.print_exc()
+            log.debug_or_exception(e)
             return get_cover_on_failure(use_generic_cover_on_failure)
     else:
         cover_file_path = os.path.join(config.config_calibre_dir, book.path)
@@ -574,29 +594,35 @@ def save_cover_from_url(url, book_path):
             requests.exceptions.Timeout) as ex:
         log.info(u'Cover Download Error %s', ex)
         return False, _("Error Downloading Cover")
-    except UnidentifiedImageError as ex:
+    except MissingDelegateError as ex:
         log.info(u'File Format Error %s', ex)
         return False, _("Cover Format Error")
 
 
 def save_cover_from_filestorage(filepath, saved_filename, img):
-    if hasattr(img, '_content'):
-        f = open(os.path.join(filepath, saved_filename), "wb")
-        f.write(img._content)
-        f.close()
-    else:
-        # check if file path exists, otherwise create it, copy file to calibre path and delete temp file
-        if not os.path.exists(filepath):
-            try:
-                os.makedirs(filepath)
-            except OSError:
-                log.error(u"Failed to create path for cover")
-                return False, _(u"Failed to create path for cover")
+    # check if file path exists, otherwise create it, copy file to calibre path and delete temp file
+    if not os.path.exists(filepath):
         try:
-            img.save(os.path.join(filepath, saved_filename))
-        except (IOError, OSError):
-            log.error(u"Cover-file is not a valid image file, or could not be stored")
-            return False, _(u"Cover-file is not a valid image file, or could not be stored")
+            os.makedirs(filepath)
+        except OSError:
+            log.error(u"Failed to create path for cover")
+            return False, _(u"Failed to create path for cover")
+    try:
+        # upload of jpg file without wand
+        if isinstance(img, requests.Response):
+            with open(os.path.join(filepath, saved_filename), 'wb') as f:
+                f.write(img.content)
+        else:
+
if hasattr(img, "metadata"): + # upload of jpg/png... via url + img.save(filename=os.path.join(filepath, saved_filename)) + img.close() + else: + # upload of jpg/png... from hdd + img.save(os.path.join(filepath, saved_filename)) + except (IOError, OSError): + log.error(u"Cover-file is not a valid image file, or could not be stored") + return False, _(u"Cover-file is not a valid image file, or could not be stored") return True, None @@ -604,31 +630,33 @@ def save_cover_from_filestorage(filepath, saved_filename, img): def save_cover(img, book_path): content_type = img.headers.get('content-type') - if use_PIL: - if content_type not in ('image/jpeg', 'image/png', 'image/webp'): - log.error("Only jpg/jpeg/png/webp files are supported as coverfile") - return False, _("Only jpg/jpeg/png/webp files are supported as coverfile") + if use_IM: + if content_type not in ('image/jpeg', 'image/png', 'image/webp', 'image/bmp'): + log.error("Only jpg/jpeg/png/webp/bmp files are supported as coverfile") + return False, _("Only jpg/jpeg/png/webp/bmp files are supported as coverfile") # convert to jpg because calibre only supports jpg - if content_type in ('image/png', 'image/webp'): + if content_type != 'image/jpg': if hasattr(img, 'stream'): - imgc = PILImage.open(img.stream) + imgc = Image(blob=img.stream) else: - imgc = PILImage.open(io.BytesIO(img.content)) - im = imgc.convert('RGB') - tmp_bytesio = io.BytesIO() - im.save(tmp_bytesio, format='JPEG') - img._content = tmp_bytesio.getvalue() + imgc = Image(blob=io.BytesIO(img.content)) + imgc.format = 'jpeg' + imgc.transform_colorspace("rgb") + img = imgc else: if content_type not in 'image/jpeg': log.error("Only jpg/jpeg files are supported as coverfile") return False, _("Only jpg/jpeg files are supported as coverfile") if config.config_use_google_drive: - tmpDir = gettempdir() - ret, message = save_cover_from_filestorage(tmpDir, "uploaded_cover.jpg", img) + tmp_dir = os.path.join(gettempdir(), 'calibre_web') + + if not 
os.path.isdir(tmp_dir): + os.mkdir(tmp_dir) + ret, message = save_cover_from_filestorage(tmp_dir, "uploaded_cover.jpg", img) if ret is True: - gd.uploadFileToEbooksFolder(os.path.join(book_path, 'cover.jpg'), - os.path.join(tmpDir, "uploaded_cover.jpg")) + gd.uploadFileToEbooksFolder(os.path.join(book_path, 'cover.jpg').replace("\\","/"), + os.path.join(tmp_dir, "uploaded_cover.jpg")) log.info("Cover is saved on Google Drive") return True, None else: @@ -682,7 +710,7 @@ def check_unrar(unrarLocation): log.debug("unrar version %s", version) break except (OSError, UnicodeDecodeError) as err: - log.exception(err) + log.debug_or_exception(err) return _('Error excecuting UnRar') @@ -722,47 +750,30 @@ def format_runtime(runtime): # helper function to apply localize status information in tasklist entries def render_task_status(tasklist): renderedtasklist = list() - for task in tasklist: - if task['user'] == current_user.nickname or current_user.role_admin(): - if task['formStarttime']: - task['starttime'] = format_datetime(task['formStarttime'], format='short', locale=get_locale()) - # task2['formStarttime'] = "" - else: - if 'starttime' not in task: - task['starttime'] = "" - - if 'formRuntime' not in task: - task['runtime'] = "" - else: - task['runtime'] = format_runtime(task['formRuntime']) + for __, user, added, task in tasklist: + if user == current_user.nickname or current_user.role_admin(): + ret = {} + if task.start_time: + ret['starttime'] = format_datetime(task.start_time, format='short', locale=get_locale()) + ret['runtime'] = format_runtime(task.runtime) # localize the task status - if isinstance(task['stat'], int): - if task['stat'] == STAT_WAITING: - task['status'] = _(u'Waiting') - elif task['stat'] == STAT_FAIL: - task['status'] = _(u'Failed') - elif task['stat'] == STAT_STARTED: - task['status'] = _(u'Started') - elif task['stat'] == STAT_FINISH_SUCCESS: - task['status'] = _(u'Finished') + if isinstance(task.stat, int): + if task.stat == STAT_WAITING: + 
ret['status'] = _(u'Waiting') + elif task.stat == STAT_FAIL: + ret['status'] = _(u'Failed') + elif task.stat == STAT_STARTED: + ret['status'] = _(u'Started') + elif task.stat == STAT_FINISH_SUCCESS: + ret['status'] = _(u'Finished') else: - task['status'] = _(u'Unknown Status') + ret['status'] = _(u'Unknown Status') - # localize the task type - if isinstance(task['taskType'], int): - if task['taskType'] == TASK_EMAIL: - task['taskMessage'] = _(u'E-mail: ') + task['taskMess'] - elif task['taskType'] == TASK_CONVERT: - task['taskMessage'] = _(u'Convert: ') + task['taskMess'] - elif task['taskType'] == TASK_UPLOAD: - task['taskMessage'] = _(u'Upload: ') + task['taskMess'] - elif task['taskType'] == TASK_CONVERT_ANY: - task['taskMessage'] = _(u'Convert: ') + task['taskMess'] - else: - task['taskMessage'] = _(u'Unknown Task: ') + task['taskMess'] - - renderedtasklist.append(task) + ret['taskMessage'] = "{}: {}".format(_(task.name), task.message) + ret['progress'] = "{} %".format(int(task.progress * 100)) + ret['user'] = user + renderedtasklist.append(ret) return renderedtasklist diff --git a/cps/isoLanguages.py b/cps/isoLanguages.py index d8b7fa00..896d4faf 100644 --- a/cps/isoLanguages.py +++ b/cps/isoLanguages.py @@ -66,3 +66,27 @@ def get_language_codes(locale, language_names, remainder=None): if remainder is not None: remainder.extend(language_names) return languages + +def get_valid_language_codes(locale, language_names, remainder=None): + languages = list() + if "" in language_names: + language_names.remove("") + for k, __ in get_language_names(locale).items(): + if k in language_names: + languages.append(k) + language_names.remove(k) + if remainder is not None and len(language_names): + remainder.extend(language_names) + return languages + +def get_lang3(lang): + try: + if len(lang) == 2: + ret_value = get(part1=lang).part3 + elif len(lang) == 3: + ret_value = lang + else: + ret_value = "" + except KeyError: + ret_value = lang + return ret_value diff --git 
a/cps/iso_language_names.py b/cps/iso_language_names.py index 2e6ec680..98368449 100644 --- a/cps/iso_language_names.py +++ b/cps/iso_language_names.py @@ -14,996 +14,996 @@ from __future__ import unicode_literals # map iso639 language codes to language names, translated LANGUAGE_NAMES = { - "pl": { - "aar": "afarski", - "abk": "abchaski", - "ace": "aczineski", - "ach": "aczoli", - "ada": "adangme", - "ady": "adygejski", - "afh": "afrihili", - "afr": "afrykanerski", - "ain": "ajnoski (Japonia)", - "aka": "akan", - "akk": "akadyjski", - "ale": "aleucki", - "alt": "ałtajski południowy", - "amh": "amharski", - "ang": "Staroangielski (ok. 450-1100)", + "cs": { + "aar": "afarština", + "abk": "abchazajština", + "ace": "atěžština", + "ach": "ačoli (luoština)", + "ada": "adangmeština", + "ady": "adyghe", + "afh": "afrihilijština", + "afr": "afrikánština", + "ain": "ainu (Japonsko)", + "aka": "akanština", + "akk": "akkadština", + "ale": "aleutština", + "alt": "altajština; jižní", + "amh": "Amharština", + "ang": "Angličtina; stará (asi 450-1100)", "anp": "angika", - "ara": "arabski", - "arc": "aramejski oficjalny (700-300 p.n.e.)", - "arg": "aragoński", - "arn": "araukański", + "ara": "arabština", + "arc": "aramejština; oficiální (700-300 př. n. 
l.)", + "arg": "aragonská španělština", + "arn": "mapudungun", "arp": "arapaho", - "arw": "arawak", - "asm": "asamski", - "ast": "asturyjski", - "ava": "awarski", - "ave": "awestyjski", - "awa": "awadhi", - "aym": "ajmara", - "aze": "azerski", - "bak": "baszkirski", - "bal": "baluczi", - "bam": "bambara", - "ban": "balijski", + "arw": "arawacké jazyky", + "asm": "ásámština", + "ast": "asturština", + "ava": "avarština", + "ave": "avestština", + "awa": "avadhština (avadhí)", + "aym": "aymarština", + "aze": "azerbajdžánština", + "bak": "Baskirština", + "bal": "balúčština", + "bam": "bambarština", + "ban": "balijština", "bas": "basa (Kamerun)", - "bej": "bedża", - "bel": "białoruski", - "bem": "bemba (Zambia)", - "ben": "bengalski", - "bho": "bhodźpuri", - "bik": "bikol", - "bin": "edo", - "bis": "bislama", + "bej": "bedža", + "bel": "běloruština", + "bem": "bemba (Zambie)", + "ben": "bengálština", + "bho": "bhódžpurština", + "bik": "bikolština", + "bin": "bini", + "bis": "bislamština", "bla": "siksika", - "bod": "tybetański", - "bos": "bośniacki", - "bra": "bradź", - "bre": "bretoński", - "bua": "buriacki", - "bug": "bugijski", - "bul": "bułgarski", - "byn": "blin", - "cad": "kaddo", - "car": "karaibski galibi", - "cat": "kataloński", - "ceb": "cebuański", - "ces": "czeski", - "cha": "czamorro", - "chb": "czibcza", - "che": "czeczeński", - "chg": "czagatajski", - "chk": "chuuk", - "chm": "maryjski (Rosja)", - "chn": "żargon chinoocki", - "cho": "czoktaw", - "chp": "chipewyan", - "chr": "czerokeski", - "chu": "starosłowiański", - "chv": "czuwaski", - "chy": "czejeński", - "cop": "koptyjski", - "cor": "kornijski", - "cos": "korsykański", - "cre": "kri", - "crh": "krymskotatarski", - "csb": "kaszubski", - "cym": "walijski", - "dak": "dakota", - "dan": "duński", - "dar": "dargwijski", - "del": "delaware", - "den": "slavey (atapaskański)", - "deu": "niemiecki", + "bod": "tibetština", + "bos": "bosenština", + "bra": "bradžština", + "bre": "bretonština", + "bua": 
"burjatština", + "bug": "bugiština", + "bul": "bulharština", + "byn": "bilin", + "cad": "caddo", + "car": "carib; Galibi", + "cat": "katalánština", + "ceb": "cebuánština", + "ces": "Čeština", + "cha": "čamoro", + "chb": "čibča", + "che": "čečenština", + "chg": "Chagatai", + "chk": "čukčtina", + "chm": "Mari (Russia)", + "chn": "činuk pidžin", + "cho": "choctawština", + "chp": "čipeva", + "chr": "čerokézština", + "chu": "Slavonic; Old", + "chv": "čuvaština", + "chy": "čejenština", + "cop": "koptština", + "cor": "kornština", + "cos": "korsičtina", + "cre": "krí", + "crh": "Turkish; Crimean", + "csb": "kašubština", + "cym": "velština", + "dak": "dakotština", + "dan": "dánština", + "dar": "dargwa", + "del": "delawarština", + "den": "atabaské jazyky", + "deu": "Němčina", "dgr": "dogrib", - "din": "dinka", - "div": "malediwski; divehi", - "doi": "dogri (makrojęzyk)", - "dsb": "dolnołużycki", - "dua": "duala", - "dum": "holenderski średniowieczny (ok. 1050-1350)", - "dyu": "diula", - "dzo": "dzongka", + "din": "dinkština", + "div": "Dhivehi", + "doi": "Dogri (macrolanguage)", + "dsb": "Sorbian; Lower", + "dua": "dualština", + "dum": "Dutch; Middle (ca. 1050-1350)", + "dyu": "djula", + "dzo": "Bhútánština", "efi": "efik", - "egy": "egipski (starożytny)", + "egy": "egyptština (starověká)", "eka": "ekajuk", - "ell": "grecki współczesny (1453-)", - "elx": "elamicki", - "eng": "Angielski", - "enm": "angielski średniowieczny (1100-1500)", + "ell": "řečtina; moderní (1453-)", + "elx": "elamština", + "eng": "Angličtina", + "enm": "Angličtina; středověká (1100-1500)", "epo": "esperanto", - "est": "estoński", - "eus": "baskijski", - "ewe": "ewe", + "est": "estonština", + "eus": "baskičtina", + "ewe": "eweština", "ewo": "ewondo", - "fan": "fang (Gwinea Równikowa)", - "fao": "farerski", - "fas": "perski", - "fat": "fanti", - "fij": "fidżyjski", - "fil": "pilipino", - "fin": "fiński", - "fon": "fon", - "fra": "francuski", - "frm": "francuski średniowieczny (ok. 
1400-1600)", - "fro": "starofrancuski (842-ok. 1400)", - "frr": "północnofryzyjski", - "frs": "wschodniofryzyjski", - "fry": "zachodniofryzyjski", - "ful": "fulani", - "fur": "friulski", + "fan": "Fang (Equatorial Guinea)", + "fao": "faerština", + "fas": "perština", + "fat": "fantiština", + "fij": "Fidži", + "fil": "Filipino", + "fin": "finština", + "fon": "fonština", + "fra": "francouzština", + "frm": "French; Middle (ca. 1400-1600)", + "fro": "French; Old (842-ca. 1400)", + "frr": "Frisian; Northern", + "frs": "Frisian; Eastern", + "fry": "Frisian; Western", + "ful": "fulahština", + "fur": "furlanština", "gaa": "ga", "gay": "gayo", - "gba": "gbaya (Republika Środkowoafrykańska)", - "gez": "gyyz", - "gil": "gilbertański", - "gla": "szkocki gaelicki", - "gle": "irlandzki", - "glg": "galicyjski", - "glv": "manx", - "gmh": "średnio-wysoko-niemiecki (ok. 1050-1500)", - "goh": "staro-wysoko-niemiecki (ok. 750-1050)", - "gon": "gondi", + "gba": "Gbaya (Central African Republic)", + "gez": "etiopština", + "gil": "kiribatština", + "gla": "Gaelic; Scottish", + "gle": "irština", + "glg": "Galician", + "glv": "manština", + "gmh": "German; Middle High (ca. 1050-1500)", + "goh": "German; Old High (ca. 
750-1050)", + "gon": "góndština", "gor": "gorontalo", - "got": "gocki", + "got": "gótština", "grb": "grebo", - "grc": "grecki starożytny (do 1453)", - "grn": "guarani", - "gsw": "niemiecki szwajcarski", - "guj": "gudźarati", - "gwi": "gwichʼin", + "grc": "řečtina; starověká (do 1453)", + "grn": "Guaranština", + "gsw": "German; Swiss", + "guj": "Gudžarátština", + "gwi": "Gwichʼin", "hai": "haida", - "hat": "kreolski haitański", - "hau": "hausa", - "haw": "hawajski", - "heb": "hebrajski", + "hat": "Creole; Haitian", + "hau": "Hausa", + "haw": "havajština", + "heb": "hebrejština", "her": "herero", - "hil": "hiligajnon", - "hin": "hindi", - "hit": "hetycki", - "hmn": "hmong", + "hil": "hiligayonština", + "hin": "hindština", + "hit": "chetitština", + "hmn": "hmongština", "hmo": "hiri motu", - "hrv": "chorwacki", - "hsb": "górnołużycki", - "hun": "węgierski", + "hrv": "chorvatština", + "hsb": "Sorbian; Upper", + "hun": "maďarština", "hup": "hupa", - "hye": "ormiański", - "iba": "ibanag", - "ibo": "ibo", + "hye": "arménština", + "iba": "iban", + "ibo": "igbo", "ido": "ido", - "iii": "syczuański", - "iku": "inuktitut", - "ile": "interlingue", - "ilo": "ilokano", - "ina": "interlingua (Międzynarodowe Stowarzyszenie Języka Pomocniczego)", - "ind": "indonezyjski", - "inh": "inguski", - "ipk": "inupiaq", - "isl": "islandzki", - "ita": "włoski", - "jav": "jawajski", + "iii": "Yi; Sichuan", + "iku": "Inuktitutština", + "ile": "Interlingue", + "ilo": "ilokánština", + "ina": "Interlingua (Mezinárodní pomocná jazyková asociace)", + "ind": "indonézština", + "inh": "inguština", + "ipk": "Inupiakština", + "isl": "islandština", + "ita": "italština", + "jav": "jávština", "jbo": "lojban", - "jpn": "japoński", - "jpr": "judeo-perski", - "jrb": "judeoarabski", - "kaa": "karakałpacki", - "kab": "kabylski", - "kac": "kaczin", - "kal": "kalaallisut", - "kam": "kamba (Kenia)", - "kan": "kannada", - "kas": "kaszmirski", - "kat": "gruziński", + "jpn": "japonština", + "jpr": "judeo-perština", + 
"jrb": "judeo-arabština", + "kaa": "karakalpačtina", + "kab": "kabulí", + "kac": "kačjinština", + "kal": "Kalaallisut", + "kam": "Kamba (Kenya)", + "kan": "Kannadština", + "kas": "kašmírština", + "kat": "Gruzínština", "kau": "kanuri", "kaw": "kawi", - "kaz": "kazaski", - "kbd": "kabardyjski", - "kha": "khasi", - "khm": "środkowokhmerski", - "kho": "chotański", - "kik": "kikiju", - "kin": "ruanda", - "kir": "kirgiski", - "kmb": "kimbundu", - "kok": "konkani (makrojęzyk)", - "kom": "komi", - "kon": "kongo", - "kor": "koreański", - "kos": "kosrae", + "kaz": "Kazachština", + "kbd": "kabardština", + "kha": "Khasi", + "khm": "Khmer; Central", + "kho": "chotánština", + "kik": "Kikuyu", + "kin": "Kinyarwandština", + "kir": "Kirgizština", + "kmb": "kimbundština", + "kok": "Konkani (macrolanguage)", + "kom": "komijština", + "kon": "Kongo", + "kor": "korejština", + "kos": "kosrajština", "kpe": "kpelle", - "krc": "karaczajsko-bałkarski", - "krl": "karelski", + "krc": "karachay-balkarština", + "krl": "karelština", "kru": "kurukh", - "kua": "kwanyama", - "kum": "kumycki", - "kur": "kurdyjski", + "kua": "Kuanyama", + "kum": "kumyčtina", + "kur": "kurdština", "kut": "kutenai", "lad": "ladino", "lah": "lahnda", - "lam": "lamba", - "lao": "laotański", - "lat": "łaciński", - "lav": "łotewski", - "lez": "lezgiński", - "lim": "limburgijski", - "lin": "lingala", - "lit": "litewski", - "lol": "mongo", - "loz": "lozi", - "ltz": "luksemburski", - "lua": "luba-lulua", - "lub": "luba-katanga", - "lug": "luganda", - "lui": "luiseno", - "lun": "lunda", - "luo": "luo (Kenia i Tanzania)", - "lus": "lushai", - "mad": "madurajski", - "mag": "magahi", - "mah": "marshalski", - "mai": "maithili", - "mak": "makasar", - "mal": "malajalam", - "man": "mandingo", - "mar": "marathi", - "mas": "masajski", - "mdf": "moksza", - "mdr": "mandar", - "men": "mende (Sierra Leone)", - "mga": "irlandzki średniowieczny (900-1200)", - "mic": "micmac", + "lam": "lambština", + "lao": "Laoština", + "lat": "latina", + 
"lav": "Latvian", + "lez": "lezgiština", + "lim": "Limburgan", + "lin": "Ngalština", + "lit": "litevština", + "lol": "mongština", + "loz": "lozština", + "ltz": "Luxembourgish", + "lua": "luba-luluaština", + "lub": "lubu-katanžština", + "lug": "Ganda", + "lui": "luiseňo", + "lun": "lundština", + "luo": "luoština (Keňa a Tanzanie)", + "lus": "lušáí", + "mad": "madurština", + "mag": "magahština", + "mah": "Marshallese", + "mai": "maithilština", + "mak": "makasarština", + "mal": "Malabarština", + "man": "mandingština", + "mar": "maráthština", + "mas": "masajština", + "mdf": "moksha", + "mdr": "mandarínština", + "men": "Mende (Sierra Leone)", + "mga": "irština; středověká (900-1200)", + "mic": "Mi'kmaq", "min": "minangkabau", - "mis": "języki niezakodowane", - "mkd": "macedoński", - "mlg": "malgaski", - "mlt": "maltański", - "mnc": "mandżurski", - "mni": "manipuri", + "mis": "Uncoded languages", + "mkd": "makedonština", + "mlg": "Malgaština", + "mlt": "maltézština", + "mnc": "manchu", + "mni": "manipurština", "moh": "mohawk", - "mon": "mongolski", - "mos": "mossi", - "mri": "maoryski", - "msa": "malajski (makrojęzyk)", - "mul": "wiele języków", - "mus": "krik", - "mwl": "mirandyjski", - "mwr": "marwari", - "mya": "birmański", - "myv": "erzja", - "nap": "neapolitański", - "nau": "nauruański", - "nav": "navaho", - "nbl": "ndebele południowy", - "nde": "ndebele północny", - "ndo": "ndonga", + "mon": "mongolština", + "mos": "mosi", + "mri": "maorština", + "msa": "Malay (macrolanguage)", + "mul": "násobné jazyky", + "mus": "krík", + "mwl": "mirandština", + "mwr": "márvárština", + "mya": "Barmština", + "myv": "erzya", + "nap": "neapolština", + "nau": "naurština", + "nav": "navažština", + "nbl": "Ndebele; South", + "nde": "Ndebele; North", + "ndo": "ndondština", "nds": "German; Low", - "nep": "nepalski", - "new": "newarski", + "nep": "nepálština", + "new": "Bhasa; Nepal", "nia": "nias", "niu": "niue", - "nld": "holenderski", - "nno": "norweski Nynorsk", - "nob": "norweski 
Bokmål", - "nog": "nogajski", - "non": "staronordyjski", - "nor": "norweski", - "nqo": "n’ko", - "nso": "sotho północny", - "nwc": "newarski klasyczny", - "nya": "njandża", - "nym": "nyamwezi", - "nyn": "nyankole", - "nyo": "nyoro", - "nzi": "nzema", - "oci": "okcytański (po 1500)", - "oji": "odżibwe", + "nld": "holandština", + "nno": "Norwegian Nynorsk", + "nob": "Norwegian Bokmål", + "nog": "nogai", + "non": "norština; stará", + "nor": "norština", + "nqo": "N'Ko", + "nso": "Sotho; Northern", + "nwc": "Newari; Old", + "nya": "Nyanja", + "nym": "ňamwežština", + "nyn": "nyankolština", + "nyo": "Nyoro", + "nzi": "nzima", + "oci": "Occitan (post 1500)", + "oji": "Ojibwa", "ori": "orija", - "orm": "oromo", - "osa": "osage", - "oss": "osetyjski", - "ota": "turecki otomański (1500-1928)", - "pag": "pangasino", - "pal": "pahlawi", - "pam": "pampango", - "pan": "pendżabski", + "orm": "Oromo (Afan)", + "osa": "osagština", + "oss": "Ossetian", + "ota": "turečtina; osmanská (1500-1928)", + "pag": "pangsinan", + "pal": "pahlaví", + "pam": "pampangau", + "pan": "Panjabi", "pap": "papiamento", - "pau": "palau", - "peo": "staroperski (ok. 600-400 p.n.e)", - "phn": "fenicki", - "pli": "pali", - "pol": "Polski", - "pon": "pohnpei", - "por": "portugalski", - "pro": "prowansalski średniowieczny (do 1500)", - "pus": "paszto", - "que": "keczua", - "raj": "radźasthani", - "rap": "rapanui", - "rar": "maoryski Wysp Cooka", - "roh": "retoromański", - "rom": "romski", - "ron": "rumuński", - "run": "rundi", - "rup": "arumuński", - "rus": "rosyjski", - "sad": "sandawe", - "sag": "sango", - "sah": "jakucki", - "sam": "samarytański aramejski", - "san": "sanskryt", - "sas": "sasak", - "sat": "santali", - "scn": "sycylijski", - "sco": "scots", - "sel": "selkupski", - "sga": "staroirlandzki (do 900)", - "shn": "szan", + "pau": "palauština", + "peo": "Persian; Old (ca. 
600-400 B.C.)", + "phn": "Slovinština", + "pli": "páli", + "pol": "Polština", + "pon": "pohnpeiština", + "por": "portugalština", + "pro": "provensálština; stará (do 1500)", + "pus": "pašto", + "que": "kečuánština", + "raj": "rádžasthánština", + "rap": "rapanuiština", + "rar": "Maori; Cook Islands", + "roh": "Romansh", + "rom": "římština", + "ron": "rumunština", + "run": "Kirundi", + "rup": "Romanian; Macedo-", + "rus": "Ruština", + "sad": "sandawština", + "sag": "sangoština", + "sah": "jakutština", + "sam": "Aramaic; Samaritan", + "san": "sanskrt", + "sas": "sačtina", + "sat": "santálí", + "scn": "sicilština", + "sco": "skotština", + "sel": "selkupština", + "sga": "irština; stará (do 900)", + "shn": "šanština", "sid": "sidamo", - "sin": "syngaleski", - "slk": "słowacki", - "slv": "słoweński", - "sma": "południowolapoński", - "sme": "północnolapoński", - "smj": "lapoński lule", - "smn": "lapoński inari", - "smo": "samoański", - "sms": "lapoński skolt", - "sna": "shona", - "snd": "sindhi", - "snk": "soninke", - "sog": "sogdiański", - "som": "somalijski", - "sot": "sotho południowy", - "spa": "hiszpański", - "sqi": "albański", - "srd": "sardyński", - "srn": "sranan tongo", - "srp": "serbski", - "srr": "serer", - "ssw": "suazi", + "sin": "Sinhálština", + "slk": "Slovenština", + "slv": "slovinština", + "sma": "Sami; Southern", + "sme": "Sami; Northern", + "smj": "lule sami", + "smn": "Sami; Inari", + "smo": "Samoyština", + "sms": "Sami; Skolt", + "sna": "šonština", + "snd": "sindhština", + "snk": "sonikština", + "sog": "sogdijština", + "som": "somálština", + "sot": "sotština; jižní", + "spa": "španělština", + "sqi": "albánština", + "srd": "sardinština", + "srn": "Sranan Tongo", + "srp": "srbština", + "srr": "Serer", + "ssw": "Siswatština", "suk": "sukuma", - "sun": "sundajski", + "sun": "Sundanština", "sus": "susu", - "sux": "sumeryjski", - "swa": "suahili (makrojęzyk)", - "swe": "szwedzki", - "syc": "syryjski klasyczny", - "syr": "syryjski", - "tah": "tahitański", - 
"tam": "tamilski", - "tat": "tatarski", - "tel": "telugu", + "sux": "sumerština", + "swa": "svahilština (makrojazyk)", + "swe": "švédština", + "syc": "Syriac; Classical", + "syr": "syrština", + "tah": "tahitština", + "tam": "Tamilština", + "tat": "tatarština", + "tel": "Telugu", "tem": "temne", "ter": "tereno", - "tet": "tetum", - "tgk": "tadżycki", - "tgl": "tagalski", - "tha": "tajski", - "tig": "tigre", - "tir": "tigrinia", - "tiv": "tiw", - "tkl": "tokelau", - "tlh": "klingoński", + "tet": "tetumština", + "tgk": "Tádžičtina", + "tgl": "Tagalog", + "tha": "thajština", + "tig": "tigrejština", + "tir": "Tigrinijština", + "tiv": "tivština", + "tkl": "tokelauština", + "tlh": "Klingon", "tli": "tlingit", - "tmh": "tuareski", - "tog": "tongański (Nyasa)", - "ton": "tongański (Wyspy Tonga)", + "tmh": "Tamashek", + "tog": "tongština (nyasa)", + "ton": "Tonga", "tpi": "tok pisin", - "tsi": "tsimszian", - "tsn": "tswana", - "tso": "tsonga", - "tuk": "turkmeński", - "tum": "tumbuka", - "tur": "turecki", - "tvl": "tuvalu", - "twi": "twi", - "tyv": "tuwiński", - "udm": "udmurcki", - "uga": "ugarycki", - "uig": "ujgurski", - "ukr": "ukraiński", + "tsi": "tsimshijské jazyky", + "tsn": "Setswanština", + "tso": "Tsonga", + "tuk": "turkmenistánština", + "tum": "tumbukština", + "tur": "turečtina", + "tvl": "tuvalština", + "twi": "ťwiština", + "tyv": "tuvština", + "udm": "udmurtština", + "uga": "ugaritština", + "uig": "Uighurština", + "ukr": "ukrajinština", "umb": "umbundu", - "und": "nieokreślony", - "urd": "urdu", - "uzb": "uzbecki", - "vai": "wai", - "ven": "venda", - "vie": "wietnamski", - "vol": "wolapik", - "vot": "wotycki", - "wal": "walamo", - "war": "warajski (Filipiny)", + "und": "neurčitý", + "urd": "urdština", + "uzb": "uzbekistánština", + "vai": "vai", + "ven": "vendština", + "vie": "vietnamština", + "vol": "volapük", + "vot": "votiatština", + "wal": "Wolaytta", + "war": "Waray (Philippines)", "was": "washo", - "wln": "waloński", - "wol": "wolof", - "xal": "kałmucki", 
- "xho": "xhosa", - "yao": "yao", - "yap": "japski", - "yid": "jidysz", - "yor": "joruba", - "zap": "zapotecki", - "zbl": "bliss", + "wln": "valonština", + "wol": "volofština", + "xal": "Kalmyk", + "xho": "xhoština", + "yao": "jaoština", + "yap": "japština", + "yid": "Jidiš", + "yor": "jorubština", + "zap": "Zapotec", + "zbl": "Blissymbols", "zen": "zenaga", - "zha": "zhuang", - "zho": "chiński", - "zul": "zuluski", - "zun": "zuni", - "zxx": "brak kontekstu językowego", - "zza": "zazaki" + "zha": "Zhuang", + "zho": "čínština", + "zul": "Zulu", + "zun": "zunijština", + "zxx": "bez lingvistického obsahu", + "zza": "zaza" }, - "nl": { - "aar": "Afar; Hamitisch", - "abk": "Abchazisch", - "ace": "Achinees", - "ach": "Acholi", - "ada": "Adangme", - "ady": "Adyghe", + "de": { + "aar": "Danakil-Sprache", + "abk": "Abchasisch", + "ace": "Aceh-Sprache", + "ach": "Acholi-Sprache", + "ada": "Adangme-Sprache", + "ady": "Adygisch", "afh": "Afrihili", "afr": "Afrikaans", - "ain": "Ainu (Japan)", - "aka": "Akaans", - "akk": "Akkadiaans", - "ale": "Aleut", - "alt": "Altajs; zuidelijk", - "amh": "Amhaars; Amharisch", - "ang": "Engels; oud (ca. 450-1100)", - "anp": "Angika", + "ain": "Ainu-Sprache (Japan)", + "aka": "Akan-Sprache", + "akk": "Akkadisch", + "ale": "Aleutisch", + "alt": "Altaisch; Süd", + "amh": "Amharisch", + "ang": "Englisch; Alt (ca. 
450-1100)", + "anp": "Anga-Sprache", "ara": "Arabisch", - "arc": "Aramees; officieel (700-300 B.C.)", - "arg": "Aragonees", + "arc": "Aramäisch", + "arg": "Aragonesisch", "arn": "Mapudungun", "arp": "Arapaho", - "arw": "Arawak", - "asm": "Assamees; Assami", + "arw": "Arawakisch", + "asm": "Assamesisch", "ast": "Asturisch", - "ava": "Avaars; Awari", + "ava": "Awarisch", "ave": "Avestisch", "awa": "Awadhi", "aym": "Aymara", - "aze": "Azerbeidzjaans", - "bak": "Basjkiers; Basjkirisch", - "bal": "Balutsji; Baluchi", + "aze": "Aserbaidschanisch", + "bak": "Baschkirisch", + "bal": "Belutschisch", "bam": "Bambara", - "ban": "Balinees", - "bas": "Basa (Kameroen)", - "bej": "Beja", - "bel": "Wit-Russisch; Belarussisch", - "bem": "Bemba (Zambia)", - "ben": "Bengaals", + "ban": "Balinesisch", + "bas": "Basa (Kamerun)", + "bej": "Bedja (Bedauye)", + "bel": "Weißrussisch", + "bem": "Bemba (Sambia)", + "ben": "Bengalisch", "bho": "Bhojpuri", "bik": "Bikol", - "bin": "Bini; Edo", + "bin": "Bini", "bis": "Bislama", - "bla": "Siksika", - "bod": "Tibetaans", + "bla": "Blackfoot", + "bod": "Tibetisch", "bos": "Bosnisch", - "bra": "Braj", - "bre": "Bretons; Bretoens", - "bua": "Boeriaats", - "bug": "Buginees", - "bul": "Bulgaars", + "bra": "Braj-Bhakha", + "bre": "Bretonisch", + "bua": "Burjatisch", + "bug": "Buginesisch", + "bul": "Bulgarisch", "byn": "Bilin", "cad": "Caddo", - "car": "Caribische talen", - "cat": "Catalaans", + "car": "Karibisch; Galíbi", + "cat": "Katalanisch", "ceb": "Cebuano", - "ces": "Tsjechisch", + "ces": "Tschechisch", "cha": "Chamorro", - "chb": "Tsjibtsja", - "che": "Tsjetsjeens", - "chg": "Chagatai", - "chk": "Chukees", - "chm": "Mari (Rusland)", - "chn": "Chinook-jargon", + "chb": "Chibcha", + "che": "Tschetschenisch", + "chg": "Tschagataisch", + "chk": "Trukesisch", + "chm": "Mari (Russland)", + "chn": "Chinook", "cho": "Choctaw", - "chp": "Chipewyaans", + "chp": "Chipewyan", "chr": "Cherokee", - "chu": "Slavisch; oud (kerk)", - "chv": "Tsjoevasjisch", + 
"chu": "Altkirchenslawisch", + "chv": "Tschuwaschisch", "chy": "Cheyenne", "cop": "Koptisch", - "cor": "Cornisch", - "cos": "Corsicaans", + "cor": "Kornisch", + "cos": "Korsisch", "cre": "Cree", - "crh": "Turks; Crimean", - "csb": "Kasjoebiaans", - "cym": "Welsh", + "crh": "Türkisch; Krimtatarisch", + "csb": "Kaschubisch", + "cym": "Walisisch", "dak": "Dakota", - "dan": "Deens", - "dar": "Dargwa", + "dan": "Dänisch", + "dar": "Darginisch", "del": "Delaware", - "den": "Slavisch (Athapascaans)", - "deu": "Duits", + "den": "Slave (Athapaskisch)", + "deu": "Deutsch", "dgr": "Dogrib", "din": "Dinka", - "div": "Divehi", - "doi": "Dogri", - "dsb": "Sorbisch; lager", + "div": "Dhivehi", + "doi": "Dogri (Makrosprache)", + "dsb": "Sorbisch; Nieder", "dua": "Duala", - "dum": "Nederlands; middel (ca. 1050-1350)", + "dum": "Niederländisch; Mittel (ca. 1050-1350)", "dyu": "Dyula", "dzo": "Dzongkha", - "efi": "Efikisch", - "egy": "Egyptisch (antiek)", + "efi": "Efik", + "egy": "Ägyptisch (Historisch)", "eka": "Ekajuk", - "ell": "Grieks; Modern (1453-)", - "elx": "Elamitisch", - "eng": "Engels", - "enm": "Engels; middel (1100-1500)", + "ell": "Neugriechisch (ab 1453)", + "elx": "Elamisch", + "eng": "Englisch", + "enm": "Mittelenglisch", "epo": "Esperanto", - "est": "Estlands", + "est": "Estnisch", "eus": "Baskisch", - "ewe": "Ewe", + "ewe": "Ewe-Sprache", "ewo": "Ewondo", - "fan": "Fang", - "fao": "Faeröers", - "fas": "Perzisch", + "fan": "Fang (Äquatorial-Guinea)", + "fao": "Färöisch", + "fas": "Persisch", "fat": "Fanti", - "fij": "Fijisch", - "fil": "Filipijns", - "fin": "Fins", + "fij": "Fidschianisch", + "fil": "Filipino", + "fin": "Finnisch", "fon": "Fon", - "fra": "Frans", - "frm": "Frans; middel (ca. 1400-1600)", - "fro": "Frans; oud (842-ca. 1400)", - "frr": "Fries; noordelijk (Duitsland)", - "frs": "Fries; oostelijk (Duitsland)", - "fry": "Fries", - "ful": "Fulah", - "fur": "Friulisch", + "fra": "Französisch", + "frm": "Französisch; Mittel (ca. 
1400 - 1600)", + "fro": "Französisch; Alt (842 - ca. 1400)", + "frr": "Friesisch; Nord", + "frs": "Friesisch; Ost", + "fry": "Friesisch; West", + "ful": "Ful", + "fur": "Friaulisch", "gaa": "Ga", "gay": "Gayo", - "gba": "Gbaya (Centraal Afrikaanse Republiek)", - "gez": "Ge'ez", - "gil": "Gilbertees", - "gla": "Keltisch; schots", - "gle": "Iers", - "glg": "Galiciaans", + "gba": "Gbaya (Zentralafrikanische Republik)", + "gez": "Altäthiopisch", + "gil": "Gilbertesisch", + "gla": "Gälisch; Schottisch", + "gle": "Irisch", + "glg": "Galicisch", "glv": "Manx", - "gmh": "Duits; middel hoog (ca. 1050-1500)", - "goh": "Duits; oud hoog (ca. 750-1050)", + "gmh": "Mittelhochdeutsch (ca. 1050-1500)", + "goh": "Althochdeutsch (ca. 750-1050)", + "gon": "Gondi", + "gor": "Gorontalesisch", + "got": "Gotisch", + "grb": "Grebo", + "grc": "Altgriechisch (bis 1453)", + "grn": "Guaraní", + "gsw": "Schweizerdeutsch", + "guj": "Gujarati", + "gwi": "Kutchin", + "hai": "Haida", + "hat": "Kreolisch; Haitisch", + "hau": "Haussa", + "haw": "Hawaiianisch", + "heb": "Hebräisch", + "her": "Herero", + "hil": "Hiligaynon", + "hin": "Hindi", + "hit": "Hethitisch", + "hmn": "Miao-Sprachen", + "hmo": "Hiri-Motu", + "hrv": "Kroatisch", + "hsb": "Obersorbisch", + "hun": "Ungarisch", + "hup": "Hupa", + "hye": "Armenisch", + "iba": "Iban", + "ibo": "Ibo", + "ido": "Ido", + "iii": "Yi; Sichuan", + "iku": "Inuktitut", + "ile": "Interlingue", + "ilo": "Ilokano", + "ina": "Interlingua (Internationale Hilfssprachen-Vereinigung)", + "ind": "Indonesisch", + "inh": "Inguschisch", + "ipk": "Inupiaq", + "isl": "Isländisch", + "ita": "Italienisch", + "jav": "Javanisch", + "jbo": "Lojban", + "jpn": "Japanisch", + "jpr": "Jüdisch-Persisch", + "jrb": "Jüdisch-Arabisch", + "kaa": "Karakalpakisch", + "kab": "Kabylisch", + "kac": "Kachinisch", + "kal": "Kalaallisut (Grönländisch)", + "kam": "Kamba (Kenia)", + "kan": "Kannada", + "kas": "Kaschmirisch", + "kat": "Georgisch", + "kau": "Kanuri", + "kaw": "Kawi; Altjavanisch", 
+ "kaz": "Kasachisch", + "kbd": "Kabardisch", + "kha": "Khasi-Sprache", + "khm": "Khmer; Zentral", + "kho": "Sakisch", + "kik": "Kikuyu", + "kin": "Rwanda", + "kir": "Kirgisisch", + "kmb": "Mbundu; Kimbundu", + "kok": "Konkani (Makrosprache)", + "kom": "Komi", + "kon": "Kongo", + "kor": "Koreanisch", + "kos": "Kosraeanisch", + "kpe": "Kpelle", + "krc": "Karachay-Balkar", + "krl": "Karenisch", + "kru": "Kurukh", + "kua": "Kwanyama", + "kum": "Kumükisch", + "kur": "Kurdisch", + "kut": "Kutenai", + "lad": "Judenspanisch", + "lah": "Lahnda", + "lam": "Banjari; Lamba", + "lao": "Laotisch", + "lat": "Lateinisch", + "lav": "Lettisch", + "lez": "Lesgisch", + "lim": "Limburgisch", + "lin": "Lingala", + "lit": "Litauisch", + "lol": "Mongo", + "loz": "Rotse", + "ltz": "Luxemburgisch", + "lua": "Luba-Lulua", + "lub": "Luba-Katanga", + "lug": "Ganda", + "lui": "Luiseno", + "lun": "Lunda", + "luo": "Luo (Kenia und Tansania)", + "lus": "Lushai", + "mad": "Maduresisch", + "mag": "Khotta", + "mah": "Marshallesisch", + "mai": "Maithili", + "mak": "Makassarisch", + "mal": "Malayalam", + "man": "Mande; Mandigo; Malinke", + "mar": "Marathi", + "mas": "Massai", + "mdf": "Moksha", + "mdr": "Mandaresisch", + "men": "Mende (Sierra Leone)", + "mga": "Mittelirisch (900-1200)", + "mic": "Mikmak", + "min": "Minangkabau", + "mis": "Nichtklassifizierte Sprachen", + "mkd": "Makedonisch", + "mlg": "Madegassisch", + "mlt": "Maltesisch", + "mnc": "Manchu; Mandschurisch", + "mni": "Meithei-Sprache", + "moh": "Mohawk", + "mon": "Mongolisch", + "mos": "Mossi", + "mri": "Maori", + "msa": "Malaiisch (Makrosprache)", + "mul": "Mehrsprachig; Polyglott", + "mus": "Muskogee", + "mwl": "Mirandesisch", + "mwr": "Marwari", + "mya": "Burmesisch", + "myv": "Erzya", + "nap": "Neapolitanisch", + "nau": "Nauruanisch", + "nav": "Navajo", + "nbl": "Ndebele (Süd)", + "nde": "Ndebele (Nord)", + "ndo": "Ndonga", + "nds": "Plattdeutsch", + "nep": "Nepali", + "new": "Bhasa; Nepalesisch", + "nia": "Nias", + "niu": "Niue", + 
"nld": "Niederländisch", + "nno": "Nynorsk (Norwegen)", + "nob": "Norwegisch-Bokmål", + "nog": "Nogai", + "non": "Altnordisch", + "nor": "Norwegisch", + "nqo": "N'Ko", + "nso": "Sotho; Nord", + "nwc": "Newari; Alt", + "nya": "Nyanja", + "nym": "Nyamwezi", + "nyn": "Nyankole", + "nyo": "Nyoro", + "nzi": "Nzima", + "oci": "Okzitanisch (nach 1500)", + "oji": "Ojibwa", + "ori": "Orija", + "orm": "Oromo", + "osa": "Osage", + "oss": "Ossetisch", + "ota": "Ottomanisch (Osmanisch/Türkisch) (1500-1928)", + "pag": "Pangasinan", + "pal": "Mittelpersisch", + "pam": "Pampanggan", + "pan": "Panjabi", + "pap": "Papiamento", + "pau": "Palau", + "peo": "Persisch; Alt (ca. 600-400 v.Chr.)", + "phn": "Phönikisch", + "pli": "Pali", + "pol": "Polnisch", + "pon": "Ponapeanisch", + "por": "Portugiesisch", + "pro": "Altokzitanisch; Altprovenzalisch (bis 1500)", + "pus": "Paschtu; Afghanisch", + "que": "Ketschua", + "raj": "Rajasthani", + "rap": "Osterinsel-Sprache; Rapanui", + "rar": "Maori; Cook-Inseln", + "roh": "Bündnerromanisch", + "rom": "Romani; Zigeunersprache", + "ron": "Rumänisch", + "run": "Rundi", + "rup": "Rumänisch; Mezedonisch", + "rus": "Russisch", + "sad": "Sandawe", + "sag": "Sango", + "sah": "Jakutisch", + "sam": "Aramäisch; Samaritanisch", + "san": "Sanskrit", + "sas": "Sassak", + "sat": "Santali", + "scn": "Sizilianisch", + "sco": "Schottisch", + "sel": "Selkupisch", + "sga": "Altirisch (bis 900)", + "shn": "Schan", + "sid": "Sidamo", + "sin": "Singhalesisch", + "slk": "Slowakisch", + "slv": "Slowenisch", + "sma": "Sami; Süd", + "sme": "Nordsamisch", + "smj": "Samisch (Lule)", + "smn": "Samisch; Inari", + "smo": "Samoanisch", + "sms": "Samisch; Skolt", + "sna": "Schona", + "snd": "Sindhi", + "snk": "Soninke", + "sog": "Sogdisch", + "som": "Somali", + "sot": "Sotho (Süd)", + "spa": "Spanisch; Kastilianisch", + "sqi": "Albanisch", + "srd": "Sardisch", + "srn": "Sranan Tongo", + "srp": "Serbisch", + "srr": "Serer", + "ssw": "Swazi", + "suk": "Sukuma", + "sun": 
"Sundanesisch", + "sus": "Susu", + "sux": "Sumerisch", + "swa": "Swahili (Makrosprache)", + "swe": "Schwedisch", + "syc": "Syrisch; Klassisch", + "syr": "Syrisch", + "tah": "Tahitisch", + "tam": "Tamilisch", + "tat": "Tatarisch", + "tel": "Telugu", + "tem": "Temne", + "ter": "Tereno", + "tet": "Tetum", + "tgk": "Tadschikisch", + "tgl": "Tagalog", + "tha": "Thailändisch", + "tig": "Tigre", + "tir": "Tigrinja", + "tiv": "Tiv", + "tkl": "Tokelauanisch", + "tlh": "Klingonisch", + "tli": "Tlingit", + "tmh": "Tamaseq", + "tog": "Tonga (Nyasa)", + "ton": "Tonga (Tonga-Inseln)", + "tpi": "Neumelanesisch; Pidgin", + "tsi": "Tsimshian", + "tsn": "Tswana", + "tso": "Tsonga", + "tuk": "Turkmenisch", + "tum": "Tumbuka", + "tur": "Türkisch", + "tvl": "Elliceanisch", + "twi": "Twi", + "tyv": "Tuwinisch", + "udm": "Udmurt", + "uga": "Ugaritisch", + "uig": "Uigurisch", + "ukr": "Ukrainisch", + "umb": "Mbundu; Umbundu", + "und": "Unbestimmbar", + "urd": "Urdu", + "uzb": "Usbekisch", + "vai": "Vai", + "ven": "Venda", + "vie": "Vietnamesisch", + "vol": "Volapük", + "vot": "Wotisch", + "wal": "Wolaytta", + "war": "Waray (Philippinen)", + "was": "Washo", + "wln": "Wallonisch", + "wol": "Wolof", + "xal": "Kalmükisch", + "xho": "Xhosa", + "yao": "Yao", + "yap": "Yapesisch", + "yid": "Jiddisch", + "yor": "Joruba", + "zap": "Zapotekisch", + "zbl": "Bliss-Symbole", + "zen": "Zenaga", + "zha": "Zhuang", + "zho": "Chinesisch", + "zul": "Zulu", + "zun": "Zuni", + "zxx": "Kein sprachlicher Inhalt", + "zza": "Zaza" + }, + "el": { + "abk": "Αμπχαζιανά", + "ace": "Ατσενέζικα", + "ach": "Ακόλο", + "ada": "Αντάνγκμε", + "ady": "Αντύγκε", + "aar": "Αφάρ", + "afh": "Αφριχίλι", + "afr": "Αφρικάανς", + "ain": "Αϊνού (Ιαπωνία)", + "aka": "Ακάν", + "akk": "Akkadian", + "sqi": "Αλβανικά", + "ale": "Aleut", + "amh": "Amharic", + "anp": "Angika", + "ara": "Αραβικά", + "arg": "Αραγωνικά ", + "arp": "Αραπάχο", + "arw": "Arawak", + "hye": "Αρμένικα", + "asm": "Assamese", + "ast": "Asturian", + "ava": "Αβαρικά", 
+ "ave": "Avestan", + "awa": "Awadhi", + "aym": "Aymara", + "aze": "Αζερμπαϊτζιανά", + "ban": "Balinese", + "bal": "Baluchi", + "bam": "Bambara", + "bas": "Basa (Cameroon)", + "bak": "Bashkir", + "eus": "Βασκικά", + "bej": "Beja", + "bel": "Λευκωρωσικά", + "bem": "Bemba (Zambia)", + "ben": "Μπενγκάλι", + "bho": "Bhojpuri", + "bik": "Bikol", + "byn": "Bilin", + "bin": "Bini", + "bis": "Bislama", + "zbl": "Blissymbols", + "bos": "Βοσνιακά", + "bra": "Braj", + "bre": "Βρετονικά", + "bug": "Buginese", + "bul": "Βουλγάρικα", + "bua": "Buriat", + "mya": "Burmese", + "cad": "Caddo", + "cat": "Καταλανικά", + "ceb": "Cebuano", + "chg": "Chagatai", + "cha": "Chamorro", + "che": "Τσετσενικά", + "chr": "Cherokee", + "chy": "Cheyenne", + "chb": "Chibcha", + "zho": "Κινέζικα", + "chn": "Chinook jargon", + "chp": "Chipewyan", + "cho": "Choctaw", + "chk": "Chuukese", + "chv": "Chuvash", + "cop": "Κοπτικά", + "cor": "Cornish", + "cos": "Κορσικανικά", + "cre": "Cree", + "mus": "Creek", + "hrv": "Κροατικά", + "ces": "Τσέχικα", + "dak": "Dakota", + "dan": "Δανέζικα", + "dar": "Dargwa", + "del": "Delaware", + "div": "Dhivehi", + "din": "Dinka", + "doi": "Dogri (macrolanguage)", + "dgr": "Dogrib", + "dua": "Duala", + "nld": "Ολλανδικά", + "dyu": "Dyula", + "dzo": "Dzongkha", + "efi": "Efik", + "egy": "Αρχαία Αιγυπτιακά", + "eka": "Ekajuk", + "elx": "Elamite", + "eng": "Αγγλικά", + "myv": "Erzya", + "epo": "Εσπεράντο", + "est": "Εσθονικά", + "ewe": "Ewe", + "ewo": "Ewondo", + "fan": "Fang (Equatorial Guinea)", + "fat": "Fanti", + "fao": "Faroese", + "fij": "Fijian", + "fil": "Filipino", + "fin": "Φινλανδικά", + "fon": "Fon", + "fra": "Γαλλικά", + "fur": "Friulian", + "ful": "Fulah", + "gaa": "Ga", + "glg": "Galician", + "lug": "Ganda", + "gay": "Gayo", + "gba": "Gbaya (Central African Republic)", + "gez": "Geez", + "kat": "Γεωργιανά", + "deu": "Γερμανικά", + "gil": "Gilbertese", "gon": "Gondi", "gor": "Gorontalo", - "got": "Gothisch", + "got": "Γοτθικά", "grb": "Grebo", - "grc": "Grieks; 
antiek (tot 1453)", "grn": "Guarani", - "gsw": "Duits; Zwitserland", "guj": "Gujarati", "gwi": "Gwichʼin", "hai": "Haida", - "hat": "Creools; Haïtiaans", "hau": "Hausa", - "haw": "Hawaiiaans", - "heb": "Hebreeuws", + "haw": "Hawaiian", + "heb": "Εβραϊκά", "her": "Herero", - "hil": "Hiligainoons", - "hin": "Hindi", - "hit": "Hittitisch", - "hmn": "Hmong", - "hmo": "Hiri Motu", - "hrv": "Kroatisch", - "hsb": "Servisch; hoger", - "hun": "Hongaars", - "hup": "Hupa", - "hye": "Armeens", - "iba": "Ibaans", - "ibo": "Igbo", - "ido": "Ido", - "iii": "Yi; Sichuan - Nuosu", - "iku": "Inuktitut", - "ile": "Interlingue", - "ilo": "Iloko", - "ina": "Interlingua (International Auxiliary Language Association)", - "ind": "Indonesisch", - "inh": "Ingoesjetisch", - "ipk": "Inupiak", - "isl": "IJslands", - "ita": "Italiaans", - "jav": "Javaans", - "jbo": "Lojbaans", - "jpn": "Japans", - "jpr": "Joods-Perzisch", - "jrb": "Joods-Arabisch", - "kaa": "Kara-Kalpak", - "kab": "Kabyle", - "kac": "Katsjin", - "kal": "Groenlands", - "kam": "Kamba (Kenya)", - "kan": "Kannada; Kanara; Kanarees", - "kas": "Kashmiri", - "kat": "Georgisch", - "kau": "Kanuri", - "kaw": "Kawi", - "kaz": "Kazachs", - "kbd": "Kabardisch; Tsjerkessisch", - "kha": "Khasi", - "khm": "Khmer, Cambodjaans", - "kho": "Khotanees", - "kik": "Kikuyu", - "kin": "Kinyarwanda", - "kir": "Kirgizisch", - "kmb": "Kimbundu", - "kok": "Konkani", - "kom": "Komi", - "kon": "Kikongo", - "kor": "Koreaans", - "kos": "Kosraeaans", - "kpe": "Kpelle", - "krc": "Karatsjay-Balkar", - "krl": "Karelisch", - "kru": "Kurukh", - "kua": "Kuanyama", - "kum": "Kumyk", - "kur": "Koerdisch", - "kut": "Kutenaïsch", - "lad": "Ladino", - "lah": "Lahnda", - "lam": "Lamba", - "lao": "Laotiaans", - "lat": "Latijn", - "lav": "Lets", - "lez": "Lezghiaans", - "lim": "Limburgs", - "lin": "Lingala", - "lit": "Litouws", - "lol": "Mongo", - "loz": "Lozi", - "ltz": "Luxemburgs", - "lua": "Luba-Lulua", - "lub": "Luba-Katanga", - "lug": "Luganda", - "lui": "Luiseno", - 
"lun": "Lunda", - "luo": "Luo (Kenia en Tanzania)", - "lus": "Lushai", - "mad": "Madurees", - "mag": "Magahisch", - "mah": "Marshallees", - "mai": "Maithili", - "mak": "Makasar", - "mal": "Malayalam", - "man": "Mandingo", - "mar": "Marathi", - "mas": "Masai", - "mdf": "Moksja", - "mdr": "Mandars", - "men": "Mende", - "mga": "Iers; middel (900-1200)", - "mic": "Mi'kmaq; Micmac", - "min": "Minangkabau", - "mis": "Niet-gecodeerde talen", - "mkd": "Macedonisch", - "mlg": "Malagassisch", - "mlt": "Maltees", - "mnc": "Manchu", - "mni": "Manipuri", - "moh": "Mohawk", - "mon": "Mongools", - "mos": "Mossisch", - "mri": "Maori", - "msa": "Maleis", - "mul": "Meerdere talen", - "mus": "Creek", - "mwl": "Mirandees", - "mwr": "Marwari", - "mya": "Burmees", - "myv": "Erzya", - "nap": "Napolitaans", - "nau": "Nauruaans", - "nav": "Navajo", - "nbl": "Ndebele; zuid", - "nde": "Ndebele; noord", - "ndo": "Ndonga", - "nds": "Duits; Laag", - "nep": "Nepalees", - "new": "Newari; Nepal", - "nia": "Nias", - "niu": "Niueaans", - "nld": "Nederlands", - "nno": "Noors; Nynorsk", - "nob": "Noors; Bokmål", - "nog": "Nogai", - "non": "Noors; oud", - "nor": "Noors", - "nqo": "N'Ko", - "nso": "Pedi; Sepedi; Noord-Sothotisch", - "nwc": "Newari; Klassiek Nepal", - "nya": "Nyanja", - "nym": "Nyamwezi", - "nyn": "Nyankools", - "nyo": "Nyoro", - "nzi": "Nzima", - "oci": "Occitaans (na 1500)", - "oji": "Ojibwa", - "ori": "Oriya", - "orm": "Oromo", - "osa": "Osaags", - "oss": "Ossetisch", - "ota": "Turks; ottomaans (1500-1928)", - "pag": "Pangasinaans", - "pal": "Pehlevi", - "pam": "Pampanga", - "pan": "Punjabi", - "pap": "Papiamento", - "pau": "Palauaans", - "peo": "Perzisch; oud (ca. 
600-400 B.C.)", - "phn": "Foenisisch", - "pli": "Pali", - "pol": "Pools", - "pon": "Pohnpeiaans", - "por": "Portugees", - "pro": "Provençaals; oud (tot 1500)", - "pus": "Poesjto", - "que": "Quechua", - "raj": "Rajasthani", - "rap": "Rapanui", - "rar": "Rarotongan; Cookeilanden Maori", - "roh": "Reto-Romaans", - "rom": "Romani", - "ron": "Roemeens", - "run": "Rundi", - "rup": "Roemeens; Macedo-", - "rus": "Russisch", - "sad": "Sandawe", - "sag": "Sangho", - "sah": "Jakoets", - "sam": "Aramees; Samaritaans", - "san": "Sanskriet", - "sas": "Sasaaks", - "sat": "Santali", - "scn": "Siciliaans", - "sco": "Schots", - "sel": "Sulkoeps", - "sga": "Iers; oud (tot 900)", - "shn": "Sjaans", - "sid": "Sidamo", - "sin": "Sinhala", - "slk": "Slowaaks", - "slv": "Sloveens", - "sma": "Samisch; zuid, Laps; zuid", - "sme": "Samisch; noord, Laps; noord", - "smj": "Lule Sami", - "smn": "Sami; Inari, Laps; Inari", - "smo": "Samoaans", - "sms": "Sami; Skolt, Laps; Skolt", - "sna": "Shona", - "snd": "Sindhi", - "snk": "Soninke", - "sog": "Sogdiaans", - "som": "Somalisch", - "sot": "Sothaans; zuidelijk", - "spa": "Spaans", - "sqi": "Albanees", - "srd": "Sardinisch", - "srn": "Sranan Tongo", - "srp": "Servisch", - "srr": "Serer", - "ssw": "Swati", - "suk": "Sukuma", - "sun": "Soendanees; Sundanees", - "sus": "Susu", - "sux": "Sumerisch", - "swa": "Swahili", - "swe": "Zweeds", - "syc": "Syriac; Klassiek", - "syr": "Syrisch", - "tah": "Tahitisch", - "tam": "Tamil", - "tat": "Tataars", - "tel": "Telugu", - "tem": "Timne", - "ter": "Tereno", - "tet": "Tetum", - "tgk": "Tadzjieks", - "tgl": "Tagalog", - "tha": "Thai", - "tig": "Tigre", - "tir": "Tigrinya", - "tiv": "Tiv", - "tkl": "Tokelau", - "tlh": "Klingon; tlhIngan-Hol", - "tli": "Tlingit", - "tmh": "Tamasjek", - "tog": "Tonga (Nyasa)", - "ton": "Tonga (Tonga-eilanden)", - "tpi": "Tok Pisin", - "tsi": "Tsimsjiaans", - "tsn": "Tswana", - "tso": "Tsonga", - "tuk": "Turkmeens", - "tum": "Tumbuka", - "tur": "Turks", - "tvl": "Tuvalu", - "twi": 
"Twi", - "tyv": "Tuviniaans", - "udm": "Udmurts", - "uga": "Ugaritisch", - "uig": "Oeigoers; Oejgoers", - "ukr": "Oekraïens", - "umb": "Umbundu", - "und": "Onbepaald", - "urd": "Urdu", - "uzb": "Oezbeeks", - "vai": "Vai", - "ven": "Venda", - "vie": "Vietnamees", - "vol": "Volapük", - "vot": "Votisch", - "wal": "Walamo", - "war": "Waray (Filipijns)", - "was": "Wasjo", - "wln": "Waals", - "wol": "Wolof", - "xal": "Kalmyk", - "xho": "Xhosa", - "yao": "Yao", - "yap": "Yapees", - "yid": "Jiddisch", - "yor": "Yoruba", - "zap": "Zapotec", - "zbl": "Blissymbolen", - "zen": "Zenaga", - "zha": "Zhuang, Tsjoeang", - "zho": "Chinees", - "zul": "Zoeloe", - "zun": "Zuni", - "zxx": "Geen linguïstische inhoud", - "zza": "Zaza" - }, - "tr": { - "abk": "Abhazca", - "ace": "Achinese", - "ach": "Acoli", - "ada": "Adangme", - "ady": "Adyghe", - "aar": "Afar", - "afh": "Afrihili", - "afr": "Afrikanca", - "ain": "Ainu (Japonca)", - "aka": "Akanca (Afrika dili)", - "akk": "Akatça", - "sqi": "Albanian", - "ale": "Alaskaca", - "amh": "Etiyopyaca", - "anp": "Angika", - "ara": "Arapça", - "arg": "Aragonca (İspanya)", - "arp": "Arapaho (Kuzey Amerika yerlileri)", - "arw": "Arawak (Surinam)", - "hye": "Ermenice", - "asm": "Assamese (Hindistan)", - "ast": "Asturyasca", - "ava": "Avarca", - "ave": "Avestan (Eski İran)", - "awa": "Awadhi (Hindistan)", - "aym": "Aymara (Güney Amerika)", - "aze": "Azerice", - "ban": "Balice (Bali adaları)", - "bal": "Belucice (İran)", - "bam": "Bambara (Mali)", - "bas": "Basa (Kamerun)", - "bak": "Başkırca", - "eus": "Baskça", - "bej": "Beja (Eritre; Sudan)", - "bel": "Beyaz Rusça", - "bem": "Bemba (Zambia)", - "ben": "Bengalce", - "bho": "Bhojpuri (Hindistan)", - "bik": "Bikol (Filipinler)", - "byn": "Bilin", - "bin": "Bini (Afrika)", - "bis": "Bislama (Vanuatu; Kuzey Pasifik)", - "zbl": "Blis Sembolleri", - "bos": "Boşnakça", - "bra": "Braj (Hindistan)", - "bre": "Bretonca", - "bug": "Buginese (Endonezya)", - "bul": "Bulgarca", - "bua": "Buriat (Moğolistan)", - 
"mya": "Burmaca", - "cad": "Caddo (Kuzey Amerika yerlileri)", - "cat": "Katalanca", - "ceb": "Cebuano (Filipinler)", - "chg": "Çağatayca", - "cha": "Chamorro (Guam adaları)", - "che": "Çeçence", - "chr": "Cherokee (Kuzey Amerika yerlileri)", - "chy": "Cheyenne (kuzey Amerika yerlileri)", - "chb": "Chibcha (Kolombiya)", - "zho": "Çince", - "chn": "Chinook lehçesi (Kuzey Batı Amerika kıyıları)", - "chp": "Chipewyan (Kuzey Amerika yerlileri)", - "cho": "Choctaw (Kuzey Amerika yerlileri)", - "chk": "Chuukese", - "chv": "Çuvaş (Türkçe)", - "cop": "Kıptice (Eski Mısır)", - "cor": "Cornish (Kelt)", - "cos": "Korsikaca", - "cre": "Cree (Kuzey Amerika yerlileri)", - "mus": "Creek", - "hrv": "Hırvatça", - "ces": "Çekçe", - "dak": "Dakota (Kuzey Amerika yerlileri)", - "dan": "Danimarkaca; Danca", - "dar": "Dargwa (Dağıstan)", - "del": "Delaware (Kuzey Amerika yerlileri)", - "div": "Dhivehi", - "din": "Dinka (Sudan)", - "doi": "Dogri (makro dili)", - "dgr": "Dogrib (Kanada)", - "dua": "Duala (Afrika)", - "nld": "Flâmanca (Hollanda dili)", - "dyu": "Dyula (Burkina Faso; Mali)", - "dzo": "Dzongkha (Butan)", - "efi": "Efik (Afrika)", - "egy": "Mısırca (Eski)", - "eka": "Ekajuk (Afrika)", - "elx": "Elamca", - "eng": "İngilizce", - "myv": "Erzya dili", - "epo": "Esperanto", - "est": "Estonca", - "ewe": "Ewe (Afrika)", - "ewo": "Ewondo (Afrika)", - "fan": "Fang (Ekvatoryal Guinea)", - "fat": "Fanti (Afrika)", - "fao": "Faroece", - "fij": "Fiji dili", - "fil": "Filipince", - "fin": "Fince", - "fon": "Fon (Benin)", - "fra": "Fransızca", - "fur": "Friulian (İtalya)", - "ful": "Fulah (Afrika)", - "gaa": "Ganaca", - "glg": "Galce", - "lug": "Ganda Dili", - "gay": "Gayo (Sumatra)", - "gba": "Gbaya (Orta Afrika Cumhuriyeti)", - "gez": "Geez (Etiyopya)", - "kat": "Gürcüce", - "deu": "Almanca", - "gil": "Kiribati dili", - "gon": "Gondi (Hindistan)", - "gor": "Gorontalo (Endonezya)", - "got": "Gotik", - "grb": "Grebo (Liberya)", - "grn": "Guarani (Paraguay)", - "guj": "Gucaratça", - "gwi": 
"Gwichʼin", - "hai": "Haida (Kuzey Amerika yerlileri)", - "hau": "Hausa Dili", - "haw": "Havai Dili", - "heb": "İbranice", - "her": "Herero Dili", "hil": "Hiligaynon", - "hin": "Hintçe", + "hin": "Ινδικά", "hmo": "Hiri Motu", - "hit": "Hititçe", + "hit": "Hittite", "hmn": "Hmong", - "hun": "Macarca", + "hun": "Ουγγρικά", "hup": "Hupa", "iba": "Iban", - "isl": "İzlandaca", - "ido": "Ido Dili", - "ibo": "Igbo Dili", + "isl": "Ισλανδικά", + "ido": "Ido", + "ibo": "Igbo", "ilo": "Iloko", - "ind": "Endonezyaca", - "inh": "İnguşca", - "ina": "Interlingua (Uluslararası Yardımcı Dil Kurumu)", + "ind": "Ινδονησιακά", + "inh": "Ingush", + "ina": "Interlingua (International Auxiliary Language Association)", "ile": "Interlingue", "iku": "Inuktitut", - "ipk": "Inupiak Dili", - "gle": "İrlandaca", - "ita": "İtalyanca", - "jpn": "Japonca", - "jav": "Cava Dili", - "jrb": "Yahudi-Arapçası", - "jpr": "Yahudi-Farsça", + "ipk": "Inupiaq", + "gle": "Ιρλανδέζικα", + "ita": "Ιταλικά", + "jpn": "Γιαπωνέζικα", + "jav": "Javanese", + "jrb": "Judeo-Arabic", + "jpr": "Judeo-Persian", "kbd": "Kabardian", "kab": "Kabyle", "kac": "Kachin", @@ -1011,222 +1011,1062 @@ LANGUAGE_NAMES = { "xal": "Kalmyk", "kam": "Kamba (Kenya)", "kan": "Kannada", - "kau": "Kanuri Dili", + "kau": "Kanuri", "kaa": "Kara-Kalpak", "krc": "Karachay-Balkar", "krl": "Karelian", - "kas": "Keşmirce", - "csb": "Kashubian (Lehçe diyalekti)", + "kas": "Kashmiri", + "csb": "Kashubian", "kaw": "Kawi", - "kaz": "Kazakça", + "kaz": "Kazakh", "kha": "Khasi", "kho": "Khotanese", - "kik": "Kikuyu Dili", + "kik": "Kikuyu", "kmb": "Kimbundu", "kin": "Kinyarwanda", - "kir": "Kırgızca", - "tlh": "Klingon", - "kom": "Komi Dili", - "kon": "Kongo Dili", - "kok": "Konkani (makro dil)", - "kor": "Korece", + "kir": "Kirghiz", + "tlh": "Κλίγκον", + "kom": "Komi", + "kon": "Kongo", + "kok": "Konkani (macrolanguage)", + "kor": "Κορεάτικα", "kos": "Kosraean", "kpe": "Kpelle", - "kua": "Kuanyama Dili", + "kua": "Kuanyama", "kum": "Kumyk", - "kur": 
"Kürtçe", + "kur": "Κουρδικά", "kru": "Kurukh", "kut": "Kutenai", "lad": "Ladino", "lah": "Lahnda", "lam": "Lamba", - "lao": "Laos Dili", - "lat": "Latince", - "lav": "Letonca", + "lao": "Lao", + "lat": "Λατινικά", + "lav": "Λετονέζικα", "lez": "Lezghian", - "lim": "Liburg Dili", - "lin": "Lingala Dili", - "lit": "Litvanyaca", - "jbo": "Lojban dili", + "lim": "Limburgan", + "lin": "Lingala", + "lit": "Λιθουανικά", + "jbo": "Lojban", "loz": "Lozi", - "lub": "Luba Katanga Dili", + "lub": "Luba-Katanga", "lua": "Luba-Lulua", "lui": "Luiseno", "smj": "Lule Sami", "lun": "Lunda", - "luo": "Luo (Kenya ve Tanzanya)", + "luo": "Luo (Kenya and Tanzania)", "lus": "Lushai", - "ltz": "Lüksemburg Dili", - "mkd": "Makedonca", + "ltz": "Luxembourgish", + "mkd": "Σλαβομακεδονικά", "mad": "Madurese", "mag": "Magahi", - "mai": "Maithili dili", + "mai": "Maithili", "mak": "Makasar", - "mlg": "Madagaskar Dili", - "msa": "Malay (makro dili)", + "mlg": "Malagasy", + "msa": "Malay (macrolanguage)", "mal": "Malayalam", - "mlt": "Maltaca", + "mlt": "Maltese", "mnc": "Manchu", "mdr": "Mandar", "man": "Mandingo", - "mni": "Manipuri dili", - "glv": "Manx (Galler)", - "mri": "Maori Dili", + "mni": "Manipuri", + "glv": "Manx", + "mri": "Maori", "arn": "Mapudungun", "mar": "Marathi", - "chm": "Mari (Rusya)", - "mah": "Marshall Dili", + "chm": "Mari (Russia)", + "mah": "Marshallese", "mwr": "Marwari", "mas": "Masai", "men": "Mende (Sierra Leone)", - "mic": "Mi'kmak", + "mic": "Mi'kmaq", "min": "Minangkabau", "mwl": "Mirandese", "moh": "Mohawk", - "mdf": "Moşka", + "mdf": "Moksha", "lol": "Mongo", - "mon": "Moğol Dili", + "mon": "Μογγολικά", "mos": "Mossi", - "mul": "Çoklu diller", + "mul": "Πολλαπλές γλώσσες", "nqo": "N'Ko", "nau": "Nauru", - "nav": "Navajo Dili", - "ndo": "Ndonga Dili", + "nav": "Navajo", + "ndo": "Ndonga", "nap": "Neapolitan", "nia": "Nias", "niu": "Niuean", - "zxx": "Hiçbir dil içeriği yok", + "zxx": "Χωρίς γλωσσολογικό περιεχόμενο", "nog": "Nogai", - "nor": "Norveçce", - 
"nob": "Norveççe Bokmal", - "nno": "Norveççe Nynorsk", + "nor": "Νορβηγικά", + "nob": "Norwegian Bokmål", + "nno": "Νορβηγικά Nynorsk", "nym": "Nyamwezi", "nya": "Nyanja", "nyn": "Nyankole", "nyo": "Nyoro", "nzi": "Nzima", - "oci": "Oksitanca (1500 sonrası)", - "oji": "Ojibwa Dili", - "orm": "Oromo Dili", + "oci": "Occitan (post 1500)", + "oji": "Ojibwa", + "orm": "Oromo", "osa": "Osage", - "oss": "Osetya Dili", - "pal": "Pehlevi", + "oss": "Ossetian", + "pal": "Pahlavi", "pau": "Palauan", - "pli": "Pali Dili", + "pli": "Pali", "pam": "Pampanga", "pag": "Pangasinan", - "pan": "Pencabi Dili", + "pan": "Panjabi", "pap": "Papiamento", - "fas": "Farsça", - "phn": "Fenikçe", + "fas": "Περσικά", + "phn": "Φοινικικά", "pon": "Pohnpeian", - "pol": "Polonyaca", - "por": "Portekizce", - "pus": "Pushto", + "pol": "Πολωνέζικα", + "por": "Πορτογαλικά", + "pus": "Pashto", "que": "Quechua", "raj": "Rajasthani", "rap": "Rapanui", - "ron": "Rumence", - "roh": "Romanca", - "rom": "Çingene Dili", - "run": "Kirundi", - "rus": "Rusça", - "smo": "Samoa Dili", + "ron": "Ρουμάνικα", + "roh": "Romansh", + "rom": "Romany", + "run": "Rundi", + "rus": "Ρώσικα", + "smo": "Samoan", "sad": "Sandawe", - "sag": "Sangho", - "san": "Sanskritçe", - "sat": "Santali dili", - "srd": "Sardinya", + "sag": "Sango", + "san": "Σανσκριτικά", + "sat": "Santali", + "srd": "Sardinian", "sas": "Sasak", - "sco": "İskoç lehçesi", + "sco": "Σκωτσέζικα", "sel": "Selkup", - "srp": "Sırpça", + "srp": "Σερβικά", "srr": "Serer", "shn": "Shan", "sna": "Shona", - "scn": "Sicilyalı", + "scn": "Sicilian", "sid": "Sidamo", - "bla": "Siksika (Kuzey Amerika yerlileri)", + "bla": "Siksika", "snd": "Sindhi", - "sin": "Sinhala Dili", - "den": "Slave (Athapascan; Kuzey Amerika yerlileri)", - "slk": "Slovakça", - "slv": "Slovence", - "sog": "Sogdian", - "som": "Somali Dili", + "sin": "Sinhala", + "den": "Slave (Athapascan)", + "slk": "Σλοβακικά", + "slv": "Σλοβενικά", + "sog": "Σογδιανά", + "som": "Σομαλικά", "snk": "Soninke", - 
"spa": "İspanyolca", + "spa": "Ισπανικά", "srn": "Sranan Tongo", "suk": "Sukuma", - "sux": "Sümerce", - "sun": "Sudan Dili", + "sux": "Sumerian", + "sun": "Sundanese", "sus": "Susu", - "swa": "Swahili (makro dil)", - "ssw": "Siswati", - "swe": "İsveçce", - "syr": "Süryanice", + "swa": "Swahili (macrolanguage)", + "ssw": "Swati", + "swe": "Σουηδικά", + "syr": "Συριακά", "tgl": "Tagalog", - "tah": "Tahitice", - "tgk": "Tacikçe", + "tah": "Tahitian", + "tgk": "Tajik", "tmh": "Tamashek", - "tam": "Tamilce", - "tat": "Tatarca", + "tam": "Ταμίλ", + "tat": "Tatar", "tel": "Telugu", "ter": "Tereno", "tet": "Tetum", - "tha": "Taylandça", - "bod": "Tibetçe", + "tha": "Ταϊλανδέζικη", + "bod": "Θιβετιανά", "tig": "Tigre", - "tir": "Tigrinya Dili", + "tir": "Tigrinya", "tem": "Timne", "tiv": "Tiv", "tli": "Tlingit", "tpi": "Tok Pisin", - "tkl": "Tokelau", + "tkl": "Τοκελάου", "tog": "Tonga (Nyasa)", - "ton": "Tonga", + "ton": "Tonga (Tonga Islands)", "tsi": "Tsimshian", "tso": "Tsonga", - "tsn": "Setswana", + "tsn": "Tswana", "tum": "Tumbuka", - "tur": "Türkçe", - "tuk": "Türkmence", + "tur": "Τουρκικά", + "tuk": "Τουρκμένικα", "tvl": "Tuvalu", "tyv": "Tuvinian", "twi": "Twi", "udm": "Udmurt", - "uga": "Ugarit Çivi Yazısı", - "uig": "Uygurca", - "ukr": "Ukraynaca", + "uga": "Ugaritic", + "uig": "Uighur", + "ukr": "Ουκρανικά", "umb": "Umbundu", - "mis": "Şifresiz diller", - "und": "Belirlenemeyen", - "urd": "Urduca", - "uzb": "Özbekçe", + "mis": "Uncoded languages", + "und": "Απροσδιόριστη", + "urd": "Urdu", + "uzb": "Uzbek", "vai": "Vai", - "ven": "Venda Dili", - "vie": "Vietnamca", + "ven": "Venda", + "vie": "Βιετναμέζικα", "vol": "Volapük", "vot": "Votic", - "wln": "Valonca", - "war": "Waray (Filipinler)", - "was": "Vasho", - "cym": "Gal Dili", + "wln": "Walloon", + "war": "Waray (Philippines)", + "was": "Washo", + "cym": "Ουαλικά", "wal": "Wolaytta", "wol": "Wolof", "xho": "Xhosa", "sah": "Yakut", "yao": "Yao", "yap": "Yapese", - "yid": "Yidiş", + "yid": "Yiddish", "yor": 
"Yoruba", "zap": "Zapotec", "zza": "Zaza", "zen": "Zenaga", - "zha": "Zuang Dili", + "zha": "Zhuang", "zul": "Zulu", "zun": "Zuni" }, + "es": { + "aar": "Afar", + "abk": "Abjasio", + "ace": "Achenés", + "ach": "Acholi", + "ada": "Adangme", + "ady": "Adigué", + "afh": "Afrihili", + "afr": "Afrikáans", + "ain": "Ainu (Japón)", + "aka": "Acano", + "akk": "Acadio", + "ale": "Aleutiano", + "alt": "Altai meridional", + "amh": "Amhárico", + "ang": "Inglés antiguo (ca. 450-1100)", + "anp": "Angika", + "ara": "Árabe", + "arc": "Arameo oficial (700-300 A. C.)", + "arg": "Aragonés", + "arn": "Mapudungun", + "arp": "Arapajó", + "arw": "Arahuaco", + "asm": "Asamés", + "ast": "Asturiano", + "ava": "Ávaro", + "ave": "Avéstico", + "awa": "Awadhi", + "aym": "Aimara", + "aze": "Azerí", + "bak": "Bashkir", + "bal": "Baluchi", + "bam": "Bambara", + "ban": "Balinés", + "bas": "Basa (Camerún)", + "bej": "Beya", + "bel": "Bielorruso", + "bem": "Bemba (Zambia)", + "ben": "Bengalí", + "bho": "Bopurí", + "bik": "Bicolano", + "bin": "Bini", + "bis": "Bislama", + "bla": "Siksiká", + "bod": "Tibetano", + "bos": "Bosnio", + "bra": "Braj", + "bre": "Bretón", + "bua": "Buriato", + "bug": "Buginés", + "bul": "Búlgaro", + "byn": "Bilin", + "cad": "Caddo", + "car": "Caribe galibí", + "cat": "Catalán", + "ceb": "Cebuano", + "ces": "Checo", + "cha": "Chamorro", + "chb": "Muisca", + "che": "Checheno", + "chg": "Chagatai", + "chk": "Chuukés", + "chm": "Mari (Rusia)", + "chn": "Jerga chinook", + "cho": "Choctaw", + "chp": "Chipewyan", + "chr": "Cheroqui", + "chu": "Eslavo antiguo", + "chv": "Chuvasio", + "chy": "Cheyenne", + "cop": "Copto", + "cor": "Córnico", + "cos": "Corso", + "cre": "Cree", + "crh": "Tártaro de Crimea", + "csb": "Casubio", + "cym": "Galés", + "dak": "Dakota", + "dan": "Danés", + "dar": "Dargwa", + "del": "Delaware", + "den": "Slave (atabascano)", + "deu": "Alemán", + "dgr": "Dogrib", + "din": "Dinka", + "div": "Dhivehi", + "doi": "Dogri (macrolengua)", + "dsb": "Bajo sorabo", + 
"dua": "Duala", + "dum": "Neerlandés medio (ca. 1050-1350)", + "dyu": "Diula", + "dzo": "Dzongkha", + "efi": "Efik", + "egy": "Egipcio (antiguo)", + "eka": "Ekajuk", + "ell": "Griego moderno (1453-)", + "elx": "Elamita", + "eng": "Inglés", + "enm": "Inglés medio (1100-1500)", + "epo": "Esperanto", + "est": "Estonio", + "eus": "Vasco", + "ewe": "Ewe", + "ewo": "Ewondo", + "fan": "Fang (Guinea Ecuatorial)", + "fao": "Feroés", + "fas": "Persa", + "fat": "Fanti", + "fij": "Fiyiano", + "fil": "Filipino", + "fin": "Finés", + "fon": "Fon", + "fra": "Francés", + "frm": "Francés medio (ca. 1400-1600)", + "fro": "Francés antiguo (842-ca. 1400)", + "frr": "Frisón septentrional", + "frs": "Frisón oriental", + "fry": "Frisón occidental", + "ful": "Fula", + "fur": "Friulano", + "gaa": "Ga", + "gay": "Gayo", + "gba": "Gbaya (República Centroafricana)", + "gez": "Ge'ez", + "gil": "Gilbertés", + "gla": "Gaélico escocés", + "gle": "Irlandés", + "glg": "Gallego", + "glv": "Manés", + "gmh": "Alto alemán medio (ca. 1050-1500)", + "goh": "Alto alemán antiguo (ca. 
750-1050)", + "gon": "Gondi", + "gor": "Gorontalo", + "got": "Gótico", + "grb": "Grebo", + "grc": "Griego antiguo (hasta 1453)", + "grn": "Guaraní", + "gsw": "Alemán suizo", + "guj": "Guyaratí", + "gwi": "Gwichʼin", + "hai": "Haida", + "hat": "Criollo haitiano", + "hau": "Hausa", + "haw": "Hawaiano", + "heb": "Hebreo", + "her": "Herero", + "hil": "Hiligainón", + "hin": "Hindi", + "hit": "Hitita", + "hmn": "Hmong", + "hmo": "Hiri motu", + "hrv": "Croata", + "hsb": "Alto sorabo", + "hun": "Húngaro", + "hup": "Hupa", + "hye": "Armenio", + "iba": "Iban", + "ibo": "Igbo", + "ido": "Ido", + "iii": "Yi de Sichuan", + "iku": "Inuktitut", + "ile": "Interlingüe", + "ilo": "Ilocano", + "ina": "Interlingua", + "ind": "Indonesio", + "inh": "Ingusetio", + "ipk": "Iñupiaq", + "isl": "Islandés", + "ita": "Italiano", + "jav": "Javanés", + "jbo": "Lojban", + "jpn": "Japonés", + "jpr": "Judeo-persa", + "jrb": "Judeo-árabe", + "kaa": "Karakalpako", + "kab": "Cabilio", + "kac": "Kachin", + "kal": "Kalaallisut", + "kam": "Kamba (Kenia)", + "kan": "Canarés", + "kas": "Cachemir", + "kat": "Georgiano", + "kau": "Kanuri", + "kaw": "Kawi", + "kaz": "Kazajo", + "kbd": "Cabardiano", + "kha": "Khasi", + "khm": "Jemer central", + "kho": "Khotanés", + "kik": "Kikuyu", + "kin": "Kinyarwanda", + "kir": "Kirguís", + "kmb": "Kimbundu", + "kok": "Konkaní (macrolengua)", + "kom": "Komi", + "kon": "Kikongo", + "kor": "Coreano", + "kos": "Kosraeano", + "kpe": "Kpelle", + "krc": "Karachayo-bálkaro", + "krl": "Carelio", + "kru": "Kurukh", + "kua": "Kuanyama", + "kum": "Cumuco", + "kur": "Curdo", + "kut": "Kutenai", + "lad": "Ladino", + "lah": "Lahnda", + "lam": "Lamba", + "lao": "Laosiano", + "lat": "Latín", + "lav": "Letón", + "lez": "Lezguino", + "lim": "Limburgan", + "lin": "Lingala", + "lit": "Lituano", + "lol": "Mongo", + "loz": "Lozi", + "ltz": "Luxemburgués", + "lua": "Chiluba", + "lub": "Kiluba", + "lug": "Luganda", + "lui": "Luiseño", + "lun": "Lunda", + "luo": "Luo (Kenia y Tanzania)", + "lus": 
"Mizo", + "mad": "Madurés", + "mag": "Magahi", + "mah": "Marshalés", + "mai": "Maithili", + "mak": "Macasar", + "mal": "Malayalam", + "man": "Mandingo", + "mar": "Maratí", + "mas": "Masai", + "mdf": "Moksha", + "mdr": "Mandar", + "men": "Mende (Sierra Leona)", + "mga": "Irlandés medio (900-1200)", + "mic": "Mi'kmaq", + "min": "Minangkabau", + "mis": "Idiomas sin codificar", + "mkd": "Macedonio", + "mlg": "Malgache", + "mlt": "Maltés", + "mnc": "Manchú", + "mni": "Meitei", + "moh": "Mohawk", + "mon": "Mongol", + "mos": "Mossi", + "mri": "Maorí", + "msa": "Malayo (macrolengua)", + "mul": "Idiomas múltiples", + "mus": "Creek", + "mwl": "Mirandés", + "mwr": "Marwari", + "mya": "Birmano", + "myv": "Erzya", + "nap": "Napolitano", + "nau": "Nauruano", + "nav": "Navajo", + "nbl": "Ndebele meridional", + "nde": "Ndebele septentrional", + "ndo": "Ndonga", + "nds": "Bajo alemán", + "nep": "Nepalí", + "new": "Bhasa; Nepalés", + "nia": "Nias", + "niu": "Niuano", + "nld": "Neerlandés", + "nno": "Noruego nynorsk", + "nob": "Noruego bokmål", + "nog": "Nogayo", + "non": "Noruego antiguo", + "nor": "Noruego", + "nqo": "N'ko", + "nso": "Sepedi", + "nwc": "Newarí antiguo", + "nya": "Nyanja", + "nym": "Nyamwezi", + "nyn": "Nyankole", + "nyo": "Nyoro", + "nzi": "Nzema", + "oci": "Occitano (posterior a 1500)", + "oji": "Ojibwa", + "ori": "Oriya", + "orm": "Oromo", + "osa": "Osage", + "oss": "Osetio", + "ota": "Turco otomano (1500-1928)", + "pag": "Pangasinense", + "pal": "Pahlavi", + "pam": "Pampango", + "pan": "Panyabí", + "pap": "Papiamento", + "pau": "Palauano", + "peo": "Persa antiguo (ca. 600-400 A. 
C.)", + "phn": "Fenicio", + "pli": "Pali", + "pol": "Polaco", + "pon": "Pohnpeiano", + "por": "Portugués", + "pro": "Provenzal antiguo (hasta 1500)", + "pus": "Pastún", + "que": "Quechua", + "raj": "Rajastaní", + "rap": "Rapanui", + "rar": "Maorí; Islas Cook", + "roh": "Romanche", + "rom": "Romaní", + "ron": "Rumano", + "run": "Kirundi", + "rup": "Rumano; Macedo-", + "rus": "Ruso", + "sad": "Sandavés", + "sag": "Sango", + "sah": "Yakuto", + "sam": "Arameo samaritano", + "san": "Sánscrito", + "sas": "Sasak", + "sat": "Santalí", + "scn": "Siciliano", + "sco": "Escocés (germánico)", + "sel": "Selkup", + "sga": "Irlandés antiguo (hasta 900)", + "shn": "Shan", + "sid": "Sidamo", + "sin": "Cingalés", + "slk": "Eslovaco", + "slv": "Esloveno", + "sma": "Sami meridional", + "sme": "Sami septentrional", + "smj": "Sami de Lule", + "smn": "Sami de Inari", + "smo": "Samoano", + "sms": "Sami de Skolt", + "sna": "Shona", + "snd": "Sindhi", + "snk": "Soninké", + "sog": "Sogdiano", + "som": "Somalí", + "sot": "Sesotho", + "spa": "Español", + "sqi": "Albanés", + "srd": "Sardo", + "srn": "Sranan tongo", + "srp": "Serbio", + "srr": "Serer", + "ssw": "Swazi", + "suk": "Sukuma", + "sun": "Sundanés", + "sus": "Susu", + "sux": "Sumerio", + "swa": "Suajili (macrolengua)", + "swe": "Sueco", + "syc": "Siríaco clásico", + "syr": "Siríaco", + "tah": "Tahitiano", + "tam": "Tamil", + "tat": "Tártaro", + "tel": "Telugú", + "tem": "Temné", + "ter": "Tereno", + "tet": "Tetun", + "tgk": "Tayiko", + "tgl": "Tagalo", + "tha": "Tailandés", + "tig": "Tigré", + "tir": "Tigriña", + "tiv": "Tiv", + "tkl": "Tokelauano", + "tlh": "Klingon", + "tli": "Tlingit", + "tmh": "Targuí", + "tog": "Tonga (Malaui)", + "ton": "Tonga (Islas Tonga)", + "tpi": "Tok pisin", + "tsi": "Tsimshian", + "tsn": "Setsuana", + "tso": "Tsonga", + "tuk": "Turcomano", + "tum": "Tumbuka", + "tur": "Turco", + "tvl": "Tuvaluano", + "twi": "Twi", + "tyv": "Tuvano", + "udm": "Udmurt", + "uga": "Ugarítico", + "uig": "Uiguro", + "ukr": 
"Ucraniano", + "umb": "Umbundu", + "und": "Indeterminado", + "urd": "Urdu", + "uzb": "Uzbeko", + "vai": "Vai", + "ven": "Venda", + "vie": "Vietnamita", + "vol": "Volapük", + "vot": "Vótico", + "wal": "Wolaytta", + "war": "Waray (Filipinas)", + "was": "Washo", + "wln": "Valón", + "wol": "Wólof", + "xal": "Calmuco", + "xho": "Xhosa", + "yao": "Yao", + "yap": "Yapés", + "yid": "Yidis", + "yor": "Yoruba", + "zap": "Zapoteco", + "zbl": "Símbolos de Bliss", + "zen": "Zenaga", + "zha": "Chuang", + "zho": "Chino", + "zul": "Zulú", + "zun": "Zuñi", + "zxx": "Sin contenido lingüístico", + "zza": "Zaza" + }, + "fi": { + "aar": "afar", + "abk": "abhaasi", + "ace": "aceh", + "ach": "atšoli", + "ada": "adangme", + "ady": "adyge", + "afh": "afrihili", + "afr": "afrikaans", + "ain": "Ainu (Japan)", + "aka": "Akan", + "akk": "akkadi", + "ale": "aleutti", + "alt": "Altai; Southern", + "amh": "amhara", + "ang": "English; Old (ca. 450-1100)", + "anp": "angika", + "ara": "arabia", + "arc": "Aramaic; Official (700-300 BCE)", + "arg": "aragonia", + "arn": "Mapudungun", + "arp": "arapaho", + "arw": "arawak", + "asm": "asami", + "ast": "asturia", + "ava": "avaari", + "ave": "avestan", + "awa": "awadhi", + "aym": "Aymara", + "aze": "azeri", + "bak": "baškiiri", + "bal": "belutši", + "bam": "Bambara", + "ban": "bali", + "bas": "Basa (Cameroon)", + "bej": "beja", + "bel": "valkovenäjä", + "bem": "Bemba (Zambia)", + "ben": "bengali", + "bho": "bhojpuri", + "bik": "bikol", + "bin": "bini", + "bis": "bislama", + "bla": "mustajalka (siksika)", + "bod": "tiibetti", + "bos": "bosnia", + "bra": "bradž", + "bre": "bretoni", + "bua": "burjaatti", + "bug": "bugi", + "bul": "bulgaria", + "byn": "Bilin", + "cad": "caddo", + "car": "Carib; Galibi", + "cat": "katalaani", + "ceb": "cebuano", + "ces": "tšekki", + "cha": "chamorro", + "chb": "chibcha", + "che": "tšetšeeni", + "chg": "Chagatai", + "chk": "chuuk", + "chm": "mari (Venäjä)", + "chn": "chinook-jargon", + "cho": "choctaw", + "chp": "chipewyan", + 
"chr": "cherokee", + "chu": "Slavonic; Old", + "chv": "tšuvassi", + "chy": "cheyenne", + "cop": "kopti", + "cor": "korni", + "cos": "korsika", + "cre": "cree", + "crh": "krimintataari", + "csb": "kašubi", + "cym": "kymri", + "dak": "dakota", + "dan": "tanska", + "dar": "dargva", + "del": "delaware", + "den": "athapaski-slavi", + "deu": "saksa", + "dgr": "dogrib", + "din": "Dinka", + "div": "Dhivehi", + "doi": "Dogri (macrolanguage)", + "dsb": "alasorbi", + "dua": "duala", + "dum": "Dutch; Middle (ca. 1050-1350)", + "dyu": "dyula", + "dzo": "dzongkha", + "efi": "efik", + "egy": "muinaisegypti", + "eka": "ekajuk", + "ell": "nykykreikka", + "elx": "elami", + "eng": "englanti", + "enm": "keskienglanti", + "epo": "esperanto", + "est": "viro", + "eus": "baski", + "ewe": "ewe", + "ewo": "ewondo", + "fan": "Fang (Equatorial Guinea)", + "fao": "fääri", + "fas": "persia", + "fat": "fanti", + "fij": "fidži", + "fil": "filipino", + "fin": "suomi", + "fon": "fon", + "fra": "ranska", + "frm": "French; Middle (ca. 1400-1600)", + "fro": "French; Old (842-ca. 1400)", + "frr": "Frisian; Northern", + "frs": "Frisian; Eastern", + "fry": "Frisian; Western", + "ful": "fulani", + "fur": "friuli", + "gaa": "gã", + "gay": "gayo", + "gba": "Gbaya (Central African Republic)", + "gez": "ge'ez", + "gil": "kiribati", + "gla": "Gaelic; Scottish", + "gle": "iiri", + "glg": "galicia", + "glv": "manksi", + "gmh": "German; Middle High (ca. 1050-1500)", + "goh": "German; Old High (ca. 
750-1050)", + "gon": "gondi", + "gor": "gorontalo", + "got": "gootti", + "grb": "grebo", + "grc": "muinaiskreikka", + "grn": "guarani", + "gsw": "German; Swiss", + "guj": "gujarati", + "gwi": "Gwichʼin", + "hai": "haida", + "hat": "Creole; Haitian", + "hau": "hausa", + "haw": "havaiji", + "heb": "heprea", + "her": "herero", + "hil": "hiligaynon", + "hin": "hindi", + "hit": "heetti", + "hmn": "hmong", + "hmo": "hiri-motu", + "hrv": "kroatia", + "hsb": "Sorbian; Upper", + "hun": "unkari", + "hup": "hupa", + "hye": "armenia", + "iba": "Iban", + "ibo": "igbo", + "ido": "ido", + "iii": "Yi; Sichuan", + "iku": "inuktitut", + "ile": "interlingue", + "ilo": "iloko", + "ina": "interlingua", + "ind": "indonesia", + "inh": "inguuši", + "ipk": "iñupiaq", + "isl": "islanti", + "ita": "italia", + "jav": "jaava", + "jbo": "lojban", + "jpn": "japani", + "jpr": "juutalaispersia", + "jrb": "juutalaisarabia", + "kaa": "karakalpakki", + "kab": "kabyyli", + "kac": "kachin", + "kal": "grönlanti", + "kam": "Kamba (Kenya)", + "kan": "kannada", + "kas": "kashmiri", + "kat": "georgia", + "kau": "kanuri", + "kaw": "kavi", + "kaz": "kazakki", + "kbd": "kabardi", + "kha": "Khasi", + "khm": "Khmer; Central", + "kho": "khotani", + "kik": "kikuyu", + "kin": "ruanda", + "kir": "kirgiisi", + "kmb": "kimbundu", + "kok": "Konkani (macrolanguage)", + "kom": "komi", + "kon": "Kongo", + "kor": "korea", + "kos": "kosrae", + "kpe": "kpelle", + "krc": "karatšai-balkaari", + "krl": "karjala", + "kru": "kurukh", + "kua": "kuanjama", + "kum": "kumykki", + "kur": "kurdi", + "kut": "kutenai", + "lad": "ladino", + "lah": "lahnda", + "lam": "lamba", + "lao": "lao", + "lat": "latina", + "lav": "latvia", + "lez": "lezgi", + "lim": "Limburgan", + "lin": "lingala", + "lit": "liettua", + "lol": "mongo", + "loz": "lozi", + "ltz": "Luxemburg", + "lua": "luba (Lulua)", + "lub": "luba (Katanga)", + "lug": "Ganda", + "lui": "luiseño", + "lun": "lunda", + "luo": "luo", + "lus": "lushai", + "mad": "madura", + "mag": 
"magahi", + "mah": "Marshallese", + "mai": "maithili", + "mak": "Makasar", + "mal": "malayalam", + "man": "mandingo", + "mar": "marathi", + "mas": "masai", + "mdf": "mokša", + "mdr": "mandar", + "men": "Mende (Sierra Leone)", + "mga": "keski-iiri", + "mic": "Mi'kmaq", + "min": "minangkabau", + "mis": "Uncoded languages", + "mkd": "makedonia", + "mlg": "malagassi", + "mlt": "malta", + "mnc": "mantšu", + "mni": "manipuri", + "moh": "mohawk", + "mon": "Mongolian", + "mos": "mossi", + "mri": "maori", + "msa": "Malay (macrolanguage)", + "mul": "monia kieliä", + "mus": "muskogi", + "mwl": "mirandês", + "mwr": "marwari", + "mya": "burma", + "myv": "ersä", + "nap": "napoli", + "nau": "nauru", + "nav": "Navajo", + "nbl": "ndebele; eteländebele", + "nde": "ndebele; pohjoisndebele", + "ndo": "ndonga", + "nds": "German; Low", + "nep": "nepali", + "new": "Bhasa; Nepal", + "nia": "nias", + "niu": "niue", + "nld": "hollanti", + "nno": "norja (uusnorja)", + "nob": "Norwegian Bokmål", + "nog": "nogai", + "non": "muinaisnorja", + "nor": "norja", + "nqo": "N'Ko", + "nso": "Sotho; Northern", + "nwc": "Newari; Old", + "nya": "Nyanja", + "nym": "nyamwezi", + "nyn": "nyankole", + "nyo": "Nyoro", + "nzi": "nzima", + "oci": "Occitan (post 1500)", + "oji": "Ojibwa", + "ori": "oriya", + "orm": "oromo", + "osa": "osage", + "oss": "Ossetian", + "ota": "osmaninturkki", + "pag": "pangasinan", + "pal": "pahlavi", + "pam": "pampanga", + "pan": "Panjabi", + "pap": "papiamentu", + "pau": "palau", + "peo": "Persian; Old (ca. 
600-400 B.C.)", + "phn": "foinikia", + "pli": "Pali", + "pol": "puola", + "pon": "pohnpei", + "por": "portugali", + "pro": "muinaisprovensaali", + "pus": "pašto", + "que": "Quechua", + "raj": "rajasthani", + "rap": "rapanui", + "rar": "Maori; Cook Islands", + "roh": "Romansh", + "rom": "romani", + "ron": "romania", + "run": "rundi", + "rup": "Romanian; Macedo-", + "rus": "venäjä", + "sad": "sandawe", + "sag": "Sango", + "sah": "jakuutti", + "sam": "Aramaic; Samaritan", + "san": "sanskrit", + "sas": "Sasak", + "sat": "santali", + "scn": "sisilia", + "sco": "skotti", + "sel": "selkuppi", + "sga": "muinaisiiri", + "shn": "shan", + "sid": "sidamo", + "sin": "Sinhala", + "slk": "slovakki", + "slv": "sloveeni", + "sma": "eteläsaame", + "sme": "pohjoissaame", + "smj": "luulajansaame", + "smn": "inarinsaame", + "smo": "samoa", + "sms": "koltansaami", + "sna": "shona", + "snd": "sindhi", + "snk": "soninke", + "sog": "sogdi", + "som": "somali", + "sot": "eteläsotho", + "spa": "espanja", + "sqi": "albania", + "srd": "sardi", + "srn": "Sranan Tongo", + "srp": "serbia", + "srr": "Serer", + "ssw": "swazi", + "suk": "sukuma", + "sun": "sunda", + "sus": "susu", + "sux": "sumeri", + "swa": "Swahili (macrolanguage)", + "swe": "ruotsi", + "syc": "Syriac; Classical", + "syr": "syyria", + "tah": "tahiti", + "tam": "tamili", + "tat": "tataari", + "tel": "Telugu", + "tem": "temne", + "ter": "tereno", + "tet": "tetum", + "tgk": "tadžikki", + "tgl": "tagalog", + "tha": "thai", + "tig": "tigre", + "tir": "tigrinya", + "tiv": "tiv", + "tkl": "tokelau", + "tlh": "Klingon", + "tli": "tlinglit", + "tmh": "Tamashek", + "tog": "Malawin tonga", + "ton": "Tongan tonga", + "tpi": "tok-pisin", + "tsi": "tsimshian", + "tsn": "tswana", + "tso": "tsonga", + "tuk": "turkmeeni", + "tum": "tumbuka", + "tur": "turkki", + "tvl": "tuvalu", + "twi": "twi", + "tyv": "tuva", + "udm": "udmurtti", + "uga": "ugarit", + "uig": "uiguuri", + "ukr": "ukraina", + "umb": "umbundu", + "und": "määrittämätön", + "urd": 
"urdu", + "uzb": "uzbekki", + "vai": "vai", + "ven": "venda", + "vie": "vietnam", + "vol": "volapük", + "vot": "vatja", + "wal": "Wolaytta", + "war": "Waray (Philippines)", + "was": "washo", + "wln": "valloni", + "wol": "wolof", + "xal": "Kalmyk", + "xho": "xhosa", + "yao": "mien", + "yap": "yap", + "yid": "jiddiš", + "yor": "yoruba", + "zap": "Zapotec", + "zbl": "Blissymbols", + "zen": "zenaga", + "zha": "Zhuang", + "zho": "kiina", + "zul": "zulu", + "zun": "zuni", + "zxx": "No linguistic content", + "zza": "Zaza" + }, "fr": { "aar": "afar", "abk": "abkhaze", @@ -2067,1266 +2907,6 @@ LANGUAGE_NAMES = { "zxx": "No linguistic content", "zza": "Zaza" }, - "ru": { - "aar": "Афар", - "abk": "Абхазский", - "ace": "Ачехский", - "ach": "Ачоли", - "ada": "Адангме", - "ady": "Адыгейский", - "afh": "Африхили", - "afr": "Африкаанс", - "ain": "Ainu (Japan)", - "aka": "Акан", - "akk": "Аккадский", - "ale": "Алеутский", - "alt": "Altai; Southern", - "amh": "Амхарский (Амаринья)", - "ang": "English; Old (ca. 
450-1100)", - "anp": "Анжика", - "ara": "Арабский", - "arc": "Арамейский; Официальный", - "arg": "Арагонский", - "arn": "Mapudungun", - "arp": "Арапахо", - "arw": "Аравакский", - "asm": "Ассамский", - "ast": "Астурийский", - "ava": "Аварский", - "ave": "Авестийский", - "awa": "Авадхи", - "aym": "Аймара", - "aze": "Азербайджанский", - "bak": "Башкирский", - "bal": "Baluchi", - "bam": "Бамбара", - "ban": "Балийский", - "bas": "Баса (Камерун)", - "bej": "Беджа", - "bel": "Белорусский", - "bem": "Бемба (Замбия)", - "ben": "Бенгальский", - "bho": "Бходжпури", - "bik": "Бикольский", - "bin": "Бини", - "bis": "Бислама", - "bla": "Сиксика", - "bod": "Тибетский", - "bos": "Боснийский", - "bra": "Браун", - "bre": "Бретонский", - "bua": "Бурятский", - "bug": "Бугийский", - "bul": "Болгарский", - "byn": "Bilin", - "cad": "Каддо", - "car": "Carib; Galibi", - "cat": "Каталанский", - "ceb": "Себуано", - "ces": "Чешский", - "cha": "Чаморро", - "chb": "Чибча", - "che": "Чеченский", - "chg": "Чагатайский", - "chk": "Трукский", - "chm": "Марийский (Россия)", - "chn": "Чинук жаргон", - "cho": "Чоктав", - "chp": "Чипевианский", - "chr": "Чероки", - "chu": "Slavonic; Old", - "chv": "Чувашский", - "chy": "Чейенн", - "cop": "Коптский", - "cor": "Корнский", - "cos": "Корсиканский", - "cre": "Кри", - "crh": "Turkish; Crimean", - "csb": "Кашубианский", - "cym": "Уэльский (Валлийский)", - "dak": "Дакота", - "dan": "Датский", - "dar": "Даргва", - "del": "Делаварский", - "den": "Атапачские языки", - "deu": "Немецкий", - "dgr": "Догриб", - "din": "Динка", - "div": "Dhivehi", - "doi": "Dogri (macrolanguage)", - "dsb": "Sorbian; Lower", - "dua": "Дуала", - "dum": "Dutch; Middle (ca. 
1050-1350)", - "dyu": "Диула (Дьюла)", - "dzo": "Дзонг-кэ", - "efi": "Эфик", - "egy": "Древнеегипетский", - "eka": "Экаджук", - "ell": "Новогреческий (с 1453)", - "elx": "Эламский", - "eng": "Английский", - "enm": "Среднеанглийский (1100-1500)", - "epo": "Эсперанто", - "est": "Эстонский", - "eus": "Баскский", - "ewe": "Эве", - "ewo": "Эвондо", - "fan": "Fang (Equatorial Guinea)", - "fao": "Фарерский", - "fas": "Персидский", - "fat": "Фанти", - "fij": "Фиджийский", - "fil": "Filipino", - "fin": "Финский", - "fon": "Фон", - "fra": "Французский", - "frm": "French; Middle (ca. 1400-1600)", - "fro": "French; Old (842-ca. 1400)", - "frr": "Frisian; Northern", - "frs": "Frisian; Eastern", - "fry": "Frisian; Western", - "ful": "Фулах", - "fur": "Фриулианский", - "gaa": "Га", - "gay": "Гайо", - "gba": "Gbaya (Central African Republic)", - "gez": "Геэз", - "gil": "Гильбертский", - "gla": "Gaelic; Scottish", - "gle": "Ирландский", - "glg": "Galician", - "glv": "Мэнкский", - "gmh": "German; Middle High (ca. 1050-1500)", - "goh": "German; Old High (ca. 
750-1050)", - "gon": "Гонди", - "gor": "Горонтало", - "got": "Готский", - "grb": "Гребо", - "grc": "Древнегреческий (по 1453)", - "grn": "Гуарани", - "gsw": "German; Swiss", - "guj": "Гуджарати", - "gwi": "Gwichʼin", - "hai": "Хайда", - "hat": "Creole; Haitian", - "hau": "Хауса", - "haw": "Гавайский", - "heb": "Иврит", - "her": "Гереро", - "hil": "Хилигайнон", - "hin": "Хинди", - "hit": "Хиттит", - "hmn": "Хмонг", - "hmo": "Хири Моту", - "hrv": "Хорватский", - "hsb": "Sorbian; Upper", - "hun": "Венгерский", - "hup": "Хупа", - "hye": "Армянский", - "iba": "Ибанский", - "ibo": "Игбо", - "ido": "Идо", - "iii": "Yi; Sichuan", - "iku": "Инуктитут", - "ile": "Интерлингве", - "ilo": "Илоко", - "ina": "Интерлингва (Ассоциация международного вспомогательного языка)", - "ind": "Индонезийский", - "inh": "Ингушский", - "ipk": "Инулиак", - "isl": "Исландский", - "ita": "Итальянский", - "jav": "Яванский", - "jbo": "Лоджбан", - "jpn": "Японский", - "jpr": "Еврейско-персидский", - "jrb": "Еврейско-арабский", - "kaa": "Каракалпакский", - "kab": "Кабильский", - "kac": "Качинский", - "kal": "Kalaallisut", - "kam": "Kamba (Kenya)", - "kan": "Каннада", - "kas": "Кашмири", - "kat": "Грузинский", - "kau": "Канури", - "kaw": "Кави", - "kaz": "Казахский", - "kbd": "Кабардинский", - "kha": "Кхаси", - "khm": "Khmer; Central", - "kho": "Хотанский", - "kik": "Кикуйю", - "kin": "Киньяруанда", - "kir": "Киргизский", - "kmb": "Кимбунду", - "kok": "Konkani (macrolanguage)", - "kom": "Коми", - "kon": "Конго", - "kor": "Корейский", - "kos": "Косраинский", - "kpe": "Кпелле", - "krc": "Карачаево-балкарский", - "krl": "Карельский", - "kru": "Курух", - "kua": "Киньяма", - "kum": "Кумыкский", - "kur": "Курдский", - "kut": "Кутенаи", - "lad": "Ладино", - "lah": "Лахнда", - "lam": "Ламба", - "lao": "Лаосский", - "lat": "Латинский", - "lav": "Латвийский", - "lez": "Лезгинский", - "lim": "Limburgan", - "lin": "Лингала", - "lit": "Литовский", - "lol": "Монго", - "loz": "Лози", - "ltz": "Luxembourgish", - 
"lua": "Луба-Лулуа", - "lub": "Луба-Катанга", - "lug": "Ганда", - "lui": "Луисеньо", - "lun": "Лунда", - "luo": "Луо (Кения и Танзания)", - "lus": "Лушай", - "mad": "Мадурский", - "mag": "Магахи", - "mah": "Marshallese", - "mai": "Майтхили", - "mak": "Макассарский", - "mal": "Малаялам", - "man": "Мандинго", - "mar": "Маратхи", - "mas": "Масаи", - "mdf": "Мокшанский", - "mdr": "Мандарский", - "men": "Mende (Sierra Leone)", - "mga": "Среднеирландский (900-1200)", - "mic": "Mi'kmaq", - "min": "Минангкабау", - "mis": "Uncoded languages", - "mkd": "Македонский", - "mlg": "Малагаси", - "mlt": "Мальтийский", - "mnc": "Манчу", - "mni": "Манипури", - "moh": "Мохаук", - "mon": "Монгольский", - "mos": "Моей", - "mri": "Маори", - "msa": "Malay (macrolanguage)", - "mul": "Разных семей языки", - "mus": "Крик", - "mwl": "Мирандские", - "mwr": "Марвари", - "mya": "Бирманский", - "myv": "Эрзянский", - "nap": "Неаполитанский", - "nau": "Науру", - "nav": "Navajo", - "nbl": "Ндебеле южный", - "nde": "Ндебеле северный", - "ndo": "Ндунга", - "nds": "German; Low", - "nep": "Непальский", - "new": "Bhasa; Nepal", - "nia": "Ниас", - "niu": "Ниуэ", - "nld": "Нидерландский", - "nno": "Норвежский Нюнорск", - "nob": "Norwegian Bokmål", - "nog": "Ногайский", - "non": "Старонорвежский", - "nor": "Норвежский", - "nqo": "Н'ко", - "nso": "Sotho; Northern", - "nwc": "Newari; Old", - "nya": "Nyanja", - "nym": "Ньямвези", - "nyn": "Ньянколе", - "nyo": "Ньоро", - "nzi": "Нзима", - "oci": "Occitan (post 1500)", - "oji": "Оджибва", - "ori": "Ория", - "orm": "Оромо", - "osa": "Оседжи", - "oss": "Ossetian", - "ota": "Турецкий; Отомангский (1500-1928)", - "pag": "Пангасинан", - "pal": "Пехлевийский", - "pam": "Пампанга", - "pan": "Panjabi", - "pap": "Папьяменто", - "pau": "Палау", - "peo": "Persian; Old (ca. 
600-400 B.C.)", - "phn": "Финикийский", - "pli": "Пали", - "pol": "Польский", - "pon": "Фонпейский", - "por": "Португальский", - "pro": "Старопровансальский (по 1500)", - "pus": "Пушту", - "que": "Кечуа", - "raj": "Раджастхани", - "rap": "Рапаню", - "rar": "Maori; Cook Islands", - "roh": "Romansh", - "rom": "Цыганский", - "ron": "Румынский", - "run": "Рунди", - "rup": "Romanian; Macedo-", - "rus": "Русский", - "sad": "Сандаве", - "sag": "Санго", - "sah": "Якутский", - "sam": "Aramaic; Samaritan", - "san": "Санскрит", - "sas": "Сасакский", - "sat": "Сантали", - "scn": "Сицилийский", - "sco": "Шотландский", - "sel": "Селкапский", - "sga": "Староирландский (по 900)", - "shn": "Шанский", - "sid": "Сидама", - "sin": "Сингальский", - "slk": "Словацкий", - "slv": "Словенский", - "sma": "Sami; Southern", - "sme": "Sami; Northern", - "smj": "Люле-саамский", - "smn": "Sami; Inari", - "smo": "Самоанский", - "sms": "Sami; Skolt", - "sna": "Шона", - "snd": "Синдхи", - "snk": "Сонинк", - "sog": "Согдийский", - "som": "Сомали", - "sot": "Сото Южный", - "spa": "Испанский", - "sqi": "Албанский", - "srd": "Сардинский", - "srn": "Sranan Tongo", - "srp": "Сербский", - "srr": "Серер", - "ssw": "Свати", - "suk": "Сукума", - "sun": "Сунданский", - "sus": "Сусу", - "sux": "Шумерский", - "swa": "Swahili (macrolanguage)", - "swe": "Шведский", - "syc": "Syriac; Classical", - "syr": "Сирийский", - "tah": "Таитянский", - "tam": "Тамильский", - "tat": "Татарский", - "tel": "Телугу", - "tem": "Темне", - "ter": "Терено", - "tet": "Тетумский", - "tgk": "Таджикский", - "tgl": "Тагалог", - "tha": "Таи", - "tig": "Тигре", - "tir": "Тигринья", - "tiv": "Тив", - "tkl": "Токелау", - "tlh": "Klingon", - "tli": "Тлингит", - "tmh": "Тамашек", - "tog": "Тонга (Ньяса)", - "ton": "Тонга (острова Тонга)", - "tpi": "Ток Писин", - "tsi": "Цимшиан", - "tsn": "Тсвана", - "tso": "Тсонга", - "tuk": "Туркменский", - "tum": "Тумбука", - "tur": "Турецкий", - "tvl": "Тувалу", - "twi": "Тви", - "tyv": "Тувинский", - 
"udm": "Удмуртский", - "uga": "Угаритский", - "uig": "Уйгурский", - "ukr": "Украинский", - "umb": "Умбунду", - "und": "Неидентифицированный", - "urd": "Урду", - "uzb": "Узбекский", - "vai": "Ваи", - "ven": "Венда", - "vie": "Вьетнамский", - "vol": "Волапюк", - "vot": "Вотик", - "wal": "Wolaytta", - "war": "Waray (Philippines)", - "was": "Вашо", - "wln": "Валлун", - "wol": "Волоф", - "xal": "Kalmyk", - "xho": "Коса", - "yao": "Яо", - "yap": "Яапийский", - "yid": "Идиш", - "yor": "Йоруба", - "zap": "Сапотекский", - "zbl": "Blissymbols", - "zen": "Зенагский", - "zha": "Чжуанский", - "zho": "Китайский", - "zul": "Зулусский", - "zun": "Зуньи", - "zxx": "Нет языкового содержимого", - "zza": "Зазаки" - }, - "zh_Hans_CN": { - "aar": "阿法尔语", - "abk": "阿布哈兹语", - "ace": "亚齐语", - "ach": "阿乔利语", - "ada": "阿当梅语", - "ady": "阿迪格语", - "afh": "阿弗里希利语", - "afr": "南非荷兰语", - "ain": "阿伊努语(日本)", - "aka": "阿坎语", - "akk": "阿卡德语", - "ale": "阿留申语", - "alt": "阿尔泰语(南)", - "amh": "阿姆哈拉语", - "ang": "英语(上古,约 450-1100)", - "anp": "安吉卡语", - "ara": "阿拉伯语", - "arc": "阿拉米语(官方,公元前 700-300)", - "arg": "阿拉贡语", - "arn": "阿劳坎语", - "arp": "阿拉帕霍语", - "arw": "阿拉瓦克语", - "asm": "阿萨姆语", - "ast": "阿斯图里亚斯语", - "ava": "阿瓦尔语", - "ave": "阿维斯陀语", - "awa": "阿瓦德语", - "aym": "艾马拉语", - "aze": "阿塞拜疆语", - "bak": "巴什基尔语", - "bal": "俾路支语", - "bam": "班巴拉语", - "ban": "巴厘语", - "bas": "巴萨语(喀麦隆)", - "bej": "贝扎语", - "bel": "白俄罗斯语", - "bem": "本巴语(赞比亚)", - "ben": "孟加拉语", - "bho": "博杰普尔语", - "bik": "比科尔语", - "bin": "比尼语", - "bis": "比斯拉马语", - "bla": "西克西卡语", - "bod": "藏语", - "bos": "波斯尼亚语", - "bra": "布拉吉语", - "bre": "布列塔尼语", - "bua": "布里亚特语", - "bug": "布吉语", - "bul": "保加利亚语", - "byn": "比林语", - "cad": "卡多语", - "car": "加勒比语", - "cat": "加泰罗尼亚语", - "ceb": "宿务语", - "ces": "捷克语", - "cha": "查莫罗语", - "chb": "奇布查语", - "che": "车臣语", - "chg": "察合台语", - "chk": "丘克语", - "chm": "马里语(俄罗斯)", - "chn": "奇努克混合语", - "cho": "乔克托语", - "chp": "奇佩维安语", - "chr": "切罗基语", - "chu": "斯拉夫语(古教会)", - "chv": "楚瓦什语", - "chy": "夏延语", - "cop": "科普特语", - "cor": "康沃尔语", - 
"cos": "科西嘉语", - "cre": "克里语", - "crh": "鞑靼语(克里米亚)", - "csb": "卡舒比语", - "cym": "威尔士语", - "dak": "达科他语", - "dan": "丹麦语", - "dar": "达尔格瓦语", - "del": "特拉华语", - "den": "史拉维语(阿沙巴斯甘)", - "deu": "德语", - "dgr": "多格里布语", - "din": "丁卡语", - "div": "迪维希语", - "doi": "多格拉语", - "dsb": "索布语(下)", - "dua": "杜亚拉语", - "dum": "荷兰语(中古,约 1050-1350)", - "dyu": "迪尤拉语", - "dzo": "宗喀语", - "efi": "埃菲克语", - "egy": "埃及语(古)", - "eka": "埃克丘克语", - "ell": "希腊语(现代,1453-)", - "elx": "埃兰语", - "eng": "英语", - "enm": "英语(中古,1100-1500)", - "epo": "世界语", - "est": "爱沙尼亚语", - "eus": "巴斯克语", - "ewe": "埃维语", - "ewo": "埃翁多语", - "fan": "芳语(赤道几内亚)", - "fao": "法罗语", - "fas": "波斯语", - "fat": "芳蒂语", - "fij": "斐济语", - "fil": "菲律宾语", - "fin": "芬兰语", - "fon": "丰语", - "fra": "法语", - "frm": "法语(中古,约 1400-1600)", - "fro": "法语(上古,842-约 1400)", - "frr": "弗里西语(北)", - "frs": "弗里西亚语(东)", - "fry": "弗里西亚语(西)", - "ful": "富拉语", - "fur": "弗留利语", - "gaa": "加语", - "gay": "卡约语", - "gba": "巴亚语(中非共和国)", - "gez": "吉兹语", - "gil": "吉尔伯特语", - "gla": "盖尔语(苏格兰)", - "gle": "爱尔兰语", - "glg": "加利西亚语", - "glv": "马恩岛语", - "gmh": "德语(中古高地,约 1050-1500)", - "goh": "德语(上古高地,约 750-1050)", - "gon": "贡德语", - "gor": "哥伦打洛语", - "got": "哥特语", - "grb": "格列博语", - "grc": "希腊语(古典,直到 1453)", - "grn": "瓜拉尼语", - "gsw": "德语(瑞士)", - "guj": "古吉拉特语", - "gwi": "库臣语", - "hai": "海达语", - "hat": "克里奥尔语(海地)", - "hau": "豪萨语", - "haw": "夏威夷语", - "heb": "希伯来语", - "her": "赫雷罗语", - "hil": "希利盖农语", - "hin": "印地语", - "hit": "赫梯语", - "hmn": "苗语", - "hmo": "希里莫图语", - "hrv": "克罗地亚语", - "hsb": "索布语(上)", - "hun": "匈牙利语", - "hup": "胡帕语", - "hye": "亚美尼亚语", - "iba": "伊班语", - "ibo": "伊博语", - "ido": "伊多语", - "iii": "彝语(四川)", - "iku": "伊努伊特语", - "ile": "国际语(西方)", - "ilo": "伊洛卡诺语", - "ina": "国际语", - "ind": "印尼语", - "inh": "印古什语", - "ipk": "依努庇克语", - "isl": "冰岛语", - "ita": "意大利语", - "jav": "爪哇语", - "jbo": "逻辑语", - "jpn": "日语", - "jpr": "犹太-波斯语", - "jrb": "犹太-阿拉伯语", - "kaa": "卡拉卡尔帕克语", - "kab": "卡布列语", - "kac": "景颇语", - "kal": "格陵兰语", - "kam": "坎巴语(肯尼亚)", - "kan": "卡纳达语", - "kas": "克什米尔语", - 
"kat": "格鲁吉亚语", - "kau": "卡努里语", - "kaw": "卡威语", - "kaz": "哈萨克语", - "kbd": "卡巴尔达语", - "kha": "卡西语", - "khm": "高棉语", - "kho": "和田语", - "kik": "基库尤语", - "kin": "基尼阿万达语", - "kir": "吉尔吉斯语", - "kmb": "金本杜语", - "kok": "孔卡尼语", - "kom": "科米语", - "kon": "刚果语", - "kor": "朝鲜语", - "kos": "科斯拉伊语", - "kpe": "克佩勒语", - "krc": "卡拉恰伊-巴尔卡尔语", - "krl": "卡累利阿语", - "kru": "库卢克语", - "kua": "宽亚玛语", - "kum": "库梅克语", - "kur": "库尔德语", - "kut": "库特内语", - "lad": "拉迪诺语", - "lah": "拉亨达语", - "lam": "兰巴语", - "lao": "老挝语", - "lat": "拉丁语", - "lav": "拉脱维亚语", - "lez": "列兹金语", - "lim": "林堡语", - "lin": "林加拉语", - "lit": "立陶宛语", - "lol": "芒戈语", - "loz": "洛齐语", - "ltz": "卢森堡语", - "lua": "卢巴-卢拉语", - "lub": "卢巴-加丹加语", - "lug": "干达语", - "lui": "卢伊塞诺语", - "lun": "隆达语", - "luo": "卢奥语(肯尼亚和坦桑尼亚)", - "lus": "卢萨语", - "mad": "马都拉语", - "mag": "摩揭陀语", - "mah": "马绍尔语", - "mai": "米德勒语", - "mak": "望加锡语", - "mal": "马拉雅拉姆语", - "man": "曼丁哥语", - "mar": "马拉地语", - "mas": "马萨伊语", - "mdf": "莫克沙语", - "mdr": "曼达语", - "men": "门德语(塞拉利昂)", - "mga": "爱尔兰语(中古,900-1200)", - "mic": "米克马克语", - "min": "米南卡保语", - "mis": "未被编码的语言", - "mkd": "马其顿语", - "mlg": "马达加斯加语", - "mlt": "马耳他语", - "mnc": "满语", - "mni": "曼尼普尔语", - "moh": "莫霍克语", - "mon": "蒙古语", - "mos": "莫西语", - "mri": "毛利语", - "msa": "马来语族", - "mul": "多种语言", - "mus": "克里克语", - "mwl": "米兰德斯语", - "mwr": "马尔瓦利语", - "mya": "缅甸语", - "myv": "厄尔兹亚语", - "nap": "拿坡里语", - "nau": "瑙鲁语", - "nav": "纳瓦霍语", - "nbl": "恩德贝勒语(南)", - "nde": "恩德贝勒语(北)", - "ndo": "恩敦加语", - "nds": "撒克逊语(低地)", - "nep": "尼泊尔语", - "new": "尼瓦尔语", - "nia": "尼亚斯语", - "niu": "纽埃语", - "nld": "荷兰语", - "nno": "新挪威语", - "nob": "挪威布克莫尔语", - "nog": "诺盖语", - "non": "诺尔斯语(古)", - "nor": "挪威语", - "nqo": "西非书面语言字母", - "nso": "索托语(北)", - "nwc": "尼瓦尔语(古典)", - "nya": "尼扬贾语", - "nym": "尼扬韦齐语", - "nyn": "尼扬科勒语", - "nyo": "尼奥罗语", - "nzi": "恩济马语", - "oci": "奥克西唐语(1500 后)", - "oji": "奥吉布瓦语", - "ori": "奥利亚语", - "orm": "奥罗莫语", - "osa": "奥萨格语", - "oss": "奥塞梯语", - "ota": "土耳其语(奥斯曼,1500-1928)", - "pag": "邦阿西楠语", - "pal": "钵罗钵语", - "pam": "邦板牙语", - "pan": 
"旁遮普语", - "pap": "帕皮亚门托语", - "pau": "帕劳语", - "peo": "波斯语(古,公元前约 600-400)", - "phn": "腓尼基语", - "pli": "巴利语", - "pol": "波兰语", - "pon": "波纳佩语", - "por": "葡萄牙语", - "pro": "普罗旺斯语(古,至 1500)", - "pus": "普什图语", - "que": "克丘亚语", - "raj": "拉贾斯坦语", - "rap": "拉帕努伊语", - "rar": "拉罗汤加语", - "roh": "罗曼什语", - "rom": "罗姆语", - "ron": "罗马尼亚语", - "run": "基隆迪语", - "rup": "阿罗马尼亚语", - "rus": "俄语", - "sad": "桑达韦语", - "sag": "桑戈语", - "sah": "雅库特语", - "sam": "阿拉米语(萨马利亚)", - "san": "梵语", - "sas": "萨萨克语", - "sat": "桑塔利语", - "scn": "西西里语", - "sco": "苏格兰语", - "sel": "塞尔库普语", - "sga": "爱尔兰语(古,至 900)", - "shn": "掸语", - "sid": "锡达莫语", - "sin": "僧加罗语", - "slk": "斯洛伐克语", - "slv": "斯洛文尼亚语", - "sma": "萨米语(南)", - "sme": "萨米语(北)", - "smj": "律勒欧-萨米语", - "smn": "伊纳里-萨米语", - "smo": "萨摩亚语", - "sms": "斯科特-萨米语", - "sna": "修纳语", - "snd": "信德语", - "snk": "索宁克语", - "sog": "粟特语", - "som": "索马里语", - "sot": "索托语(南)", - "spa": "西班牙语", - "sqi": "阿尔巴尼亚语", - "srd": "撒丁语", - "srn": "苏里南汤加语", - "srp": "塞尔维亚语", - "srr": "塞雷尔语", - "ssw": "斯瓦特语", - "suk": "苏库马语", - "sun": "巽他语", - "sus": "苏苏语", - "sux": "苏美尔语", - "swa": "斯瓦希里语族", - "swe": "瑞典语", - "syc": "叙利亚语(古典)", - "syr": "古叙利亚语", - "tah": "塔希提语", - "tam": "泰米尔语", - "tat": "塔塔尔语", - "tel": "泰卢固语", - "tem": "滕内语", - "ter": "特列纳语", - "tet": "特塔姆语", - "tgk": "塔吉克语", - "tgl": "塔加洛语", - "tha": "泰语", - "tig": "提格雷语", - "tir": "提格里尼亚语", - "tiv": "蒂夫语", - "tkl": "托克劳语", - "tlh": "克林贡语", - "tli": "特林吉特语", - "tmh": "塔马舍克语", - "tog": "汤加语 (尼亚萨)", - "ton": "汤加语(汤加岛)", - "tpi": "托克皮辛语", - "tsi": "钦西安语", - "tsn": "茨瓦纳语", - "tso": "聪加语", - "tuk": "土库曼语", - "tum": "奇图姆布卡语", - "tur": "土耳其语", - "tvl": "图瓦卢语", - "twi": "契维语", - "tyv": "图瓦语", - "udm": "乌德穆尔特语", - "uga": "乌加里特语", - "uig": "维吾尔语", - "ukr": "乌克兰语", - "umb": "翁本杜语", - "und": "未确定的语言", - "urd": "乌尔都语", - "uzb": "乌兹别克语", - "vai": "瓦伊语", - "ven": "文达语", - "vie": "越南语", - "vol": "沃拉普克语", - "vot": "沃提克语", - "wal": "瓦拉莫语", - "war": "瓦赖语(菲律宾)", - "was": "瓦肖语", - "wln": "瓦龙语", - "wol": "沃洛夫语", - "xal": "卡尔梅克语", - "xho": "科萨语", - "yao": 
"瑶语", - "yap": "雅浦语", - "yid": "依地语", - "yor": "约鲁巴语", - "zap": "萨波特克语", - "zbl": "布利斯符号", - "zen": "哲纳加语", - "zha": "壮语", - "zho": "中文", - "zul": "祖鲁语", - "zun": "祖尼语", - "zxx": "No linguistic content", - "zza": "扎扎其语" - }, - "fi": { - "aar": "afar", - "abk": "abhaasi", - "ace": "aceh", - "ach": "atšoli", - "ada": "adangme", - "ady": "adyge", - "afh": "afrihili", - "afr": "afrikaans", - "ain": "Ainu (Japan)", - "aka": "Akan", - "akk": "akkadi", - "ale": "aleutti", - "alt": "Altai; Southern", - "amh": "amhara", - "ang": "English; Old (ca. 450-1100)", - "anp": "angika", - "ara": "arabia", - "arc": "Aramaic; Official (700-300 BCE)", - "arg": "aragonia", - "arn": "Mapudungun", - "arp": "arapaho", - "arw": "arawak", - "asm": "asami", - "ast": "asturia", - "ava": "avaari", - "ave": "avestan", - "awa": "awadhi", - "aym": "Aymara", - "aze": "azeri", - "bak": "baškiiri", - "bal": "belutši", - "bam": "Bambara", - "ban": "bali", - "bas": "Basa (Cameroon)", - "bej": "beja", - "bel": "valkovenäjä", - "bem": "Bemba (Zambia)", - "ben": "bengali", - "bho": "bhojpuri", - "bik": "bikol", - "bin": "bini", - "bis": "bislama", - "bla": "mustajalka (siksika)", - "bod": "tiibetti", - "bos": "bosnia", - "bra": "bradž", - "bre": "bretoni", - "bua": "burjaatti", - "bug": "bugi", - "bul": "bulgaria", - "byn": "Bilin", - "cad": "caddo", - "car": "Carib; Galibi", - "cat": "katalaani", - "ceb": "cebuano", - "ces": "tšekki", - "cha": "chamorro", - "chb": "chibcha", - "che": "tšetšeeni", - "chg": "Chagatai", - "chk": "chuuk", - "chm": "mari (Venäjä)", - "chn": "chinook-jargon", - "cho": "choctaw", - "chp": "chipewyan", - "chr": "cherokee", - "chu": "Slavonic; Old", - "chv": "tšuvassi", - "chy": "cheyenne", - "cop": "kopti", - "cor": "korni", - "cos": "korsika", - "cre": "cree", - "crh": "krimintataari", - "csb": "kašubi", - "cym": "kymri", - "dak": "dakota", - "dan": "tanska", - "dar": "dargva", - "del": "delaware", - "den": "athapaski-slavi", - "deu": "saksa", - "dgr": "dogrib", - "din": 
"Dinka", - "div": "Dhivehi", - "doi": "Dogri (macrolanguage)", - "dsb": "alasorbi", - "dua": "duala", - "dum": "Dutch; Middle (ca. 1050-1350)", - "dyu": "dyula", - "dzo": "dzongkha", - "efi": "efik", - "egy": "muinaisegypti", - "eka": "ekajuk", - "ell": "nykykreikka", - "elx": "elami", - "eng": "englanti", - "enm": "keskienglanti", - "epo": "esperanto", - "est": "viro", - "eus": "baski", - "ewe": "ewe", - "ewo": "ewondo", - "fan": "Fang (Equatorial Guinea)", - "fao": "fääri", - "fas": "persia", - "fat": "fanti", - "fij": "fidži", - "fil": "filipino", - "fin": "suomi", - "fon": "fon", - "fra": "ranska", - "frm": "French; Middle (ca. 1400-1600)", - "fro": "French; Old (842-ca. 1400)", - "frr": "Frisian; Northern", - "frs": "Frisian; Eastern", - "fry": "Frisian; Western", - "ful": "fulani", - "fur": "friuli", - "gaa": "gã", - "gay": "gayo", - "gba": "Gbaya (Central African Republic)", - "gez": "ge'ez", - "gil": "kiribati", - "gla": "Gaelic; Scottish", - "gle": "iiri", - "glg": "galicia", - "glv": "manksi", - "gmh": "German; Middle High (ca. 1050-1500)", - "goh": "German; Old High (ca. 
750-1050)", - "gon": "gondi", - "gor": "gorontalo", - "got": "gootti", - "grb": "grebo", - "grc": "muinaiskreikka", - "grn": "guarani", - "gsw": "German; Swiss", - "guj": "gujarati", - "gwi": "Gwichʼin", - "hai": "haida", - "hat": "Creole; Haitian", - "hau": "hausa", - "haw": "havaiji", - "heb": "heprea", - "her": "herero", - "hil": "hiligaynon", - "hin": "hindi", - "hit": "heetti", - "hmn": "hmong", - "hmo": "hiri-motu", - "hrv": "kroatia", - "hsb": "Sorbian; Upper", - "hun": "unkari", - "hup": "hupa", - "hye": "armenia", - "iba": "Iban", - "ibo": "igbo", - "ido": "ido", - "iii": "Yi; Sichuan", - "iku": "inuktitut", - "ile": "interlingue", - "ilo": "iloko", - "ina": "interlingua", - "ind": "indonesia", - "inh": "inguuši", - "ipk": "iñupiaq", - "isl": "islanti", - "ita": "italia", - "jav": "jaava", - "jbo": "lojban", - "jpn": "japani", - "jpr": "juutalaispersia", - "jrb": "juutalaisarabia", - "kaa": "karakalpakki", - "kab": "kabyyli", - "kac": "kachin", - "kal": "grönlanti", - "kam": "Kamba (Kenya)", - "kan": "kannada", - "kas": "kashmiri", - "kat": "georgia", - "kau": "kanuri", - "kaw": "kavi", - "kaz": "kazakki", - "kbd": "kabardi", - "kha": "Khasi", - "khm": "Khmer; Central", - "kho": "khotani", - "kik": "kikuyu", - "kin": "ruanda", - "kir": "kirgiisi", - "kmb": "kimbundu", - "kok": "Konkani (macrolanguage)", - "kom": "komi", - "kon": "Kongo", - "kor": "korea", - "kos": "kosrae", - "kpe": "kpelle", - "krc": "karatšai-balkaari", - "krl": "karjala", - "kru": "kurukh", - "kua": "kuanjama", - "kum": "kumykki", - "kur": "kurdi", - "kut": "kutenai", - "lad": "ladino", - "lah": "lahnda", - "lam": "lamba", - "lao": "lao", - "lat": "latina", - "lav": "latvia", - "lez": "lezgi", - "lim": "Limburgan", - "lin": "lingala", - "lit": "liettua", - "lol": "mongo", - "loz": "lozi", - "ltz": "Luxemburg", - "lua": "luba (Lulua)", - "lub": "luba (Katanga)", - "lug": "Ganda", - "lui": "luiseño", - "lun": "lunda", - "luo": "luo", - "lus": "lushai", - "mad": "madura", - "mag": 
"magahi", - "mah": "Marshallese", - "mai": "maithili", - "mak": "Makasar", - "mal": "malayalam", - "man": "mandingo", - "mar": "marathi", - "mas": "masai", - "mdf": "mokša", - "mdr": "mandar", - "men": "Mende (Sierra Leone)", - "mga": "keski-iiri", - "mic": "Mi'kmaq", - "min": "minangkabau", - "mis": "Uncoded languages", - "mkd": "makedonia", - "mlg": "malagassi", - "mlt": "malta", - "mnc": "mantšu", - "mni": "manipuri", - "moh": "mohawk", - "mon": "Mongolian", - "mos": "mossi", - "mri": "maori", - "msa": "Malay (macrolanguage)", - "mul": "monia kieliä", - "mus": "muskogi", - "mwl": "mirandês", - "mwr": "marwari", - "mya": "burma", - "myv": "ersä", - "nap": "napoli", - "nau": "nauru", - "nav": "Navajo", - "nbl": "ndebele; eteländebele", - "nde": "ndebele; pohjoisndebele", - "ndo": "ndonga", - "nds": "German; Low", - "nep": "nepali", - "new": "Bhasa; Nepal", - "nia": "nias", - "niu": "niue", - "nld": "hollanti", - "nno": "norja (uusnorja)", - "nob": "Norwegian Bokmål", - "nog": "nogai", - "non": "muinaisnorja", - "nor": "norja", - "nqo": "N'Ko", - "nso": "Sotho; Northern", - "nwc": "Newari; Old", - "nya": "Nyanja", - "nym": "nyamwezi", - "nyn": "nyankole", - "nyo": "Nyoro", - "nzi": "nzima", - "oci": "Occitan (post 1500)", - "oji": "Ojibwa", - "ori": "oriya", - "orm": "oromo", - "osa": "osage", - "oss": "Ossetian", - "ota": "osmaninturkki", - "pag": "pangasinan", - "pal": "pahlavi", - "pam": "pampanga", - "pan": "Panjabi", - "pap": "papiamentu", - "pau": "palau", - "peo": "Persian; Old (ca. 
600-400 B.C.)", - "phn": "foinikia", - "pli": "Pali", - "pol": "puola", - "pon": "pohnpei", - "por": "portugali", - "pro": "muinaisprovensaali", - "pus": "pašto", - "que": "Quechua", - "raj": "rajasthani", - "rap": "rapanui", - "rar": "Maori; Cook Islands", - "roh": "Romansh", - "rom": "romani", - "ron": "romania", - "run": "rundi", - "rup": "Romanian; Macedo-", - "rus": "venäjä", - "sad": "sandawe", - "sag": "Sango", - "sah": "jakuutti", - "sam": "Aramaic; Samaritan", - "san": "sanskrit", - "sas": "Sasak", - "sat": "santali", - "scn": "sisilia", - "sco": "skotti", - "sel": "selkuppi", - "sga": "muinaisiiri", - "shn": "shan", - "sid": "sidamo", - "sin": "Sinhala", - "slk": "slovakki", - "slv": "sloveeni", - "sma": "eteläsaame", - "sme": "pohjoissaame", - "smj": "luulajansaame", - "smn": "inarinsaame", - "smo": "samoa", - "sms": "koltansaami", - "sna": "shona", - "snd": "sindhi", - "snk": "soninke", - "sog": "sogdi", - "som": "somali", - "sot": "eteläsotho", - "spa": "espanja", - "sqi": "albania", - "srd": "sardi", - "srn": "Sranan Tongo", - "srp": "serbia", - "srr": "Serer", - "ssw": "swazi", - "suk": "sukuma", - "sun": "sunda", - "sus": "susu", - "sux": "sumeri", - "swa": "Swahili (macrolanguage)", - "swe": "ruotsi", - "syc": "Syriac; Classical", - "syr": "syyria", - "tah": "tahiti", - "tam": "tamili", - "tat": "tataari", - "tel": "Telugu", - "tem": "temne", - "ter": "tereno", - "tet": "tetum", - "tgk": "tadžikki", - "tgl": "tagalog", - "tha": "thai", - "tig": "tigre", - "tir": "tigrinya", - "tiv": "tiv", - "tkl": "tokelau", - "tlh": "Klingon", - "tli": "tlinglit", - "tmh": "Tamashek", - "tog": "Malawin tonga", - "ton": "Tongan tonga", - "tpi": "tok-pisin", - "tsi": "tsimshian", - "tsn": "tswana", - "tso": "tsonga", - "tuk": "turkmeeni", - "tum": "tumbuka", - "tur": "turkki", - "tvl": "tuvalu", - "twi": "twi", - "tyv": "tuva", - "udm": "udmurtti", - "uga": "ugarit", - "uig": "uiguuri", - "ukr": "ukraina", - "umb": "umbundu", - "und": "määrittämätön", - "urd": 
"urdu", - "uzb": "uzbekki", - "vai": "vai", - "ven": "venda", - "vie": "vietnam", - "vol": "volapük", - "vot": "vatja", - "wal": "Wolaytta", - "war": "Waray (Philippines)", - "was": "washo", - "wln": "valloni", - "wol": "wolof", - "xal": "Kalmyk", - "xho": "xhosa", - "yao": "mien", - "yap": "yap", - "yid": "jiddiš", - "yor": "yoruba", - "zap": "Zapotec", - "zbl": "Blissymbols", - "zen": "zenaga", - "zha": "Zhuang", - "zho": "kiina", - "zul": "zulu", - "zun": "zuni", - "zxx": "No linguistic content", - "zza": "Zaza" - }, "it": { "aar": "Afar", "abk": "Abkhazian", @@ -3747,846 +3327,6 @@ LANGUAGE_NAMES = { "zxx": "Nessun contenuto linguistico", "zza": "Zaza" }, - "uk": { - "aar": "афар", - "abk": "абхазька", - "ace": "ачеська", - "ach": "ачолі", - "ada": "адангме", - "ady": "адигейська", - "afh": "афрингілі", - "afr": "африкаанс", - "ain": "айнська (Японія)", - "aka": "акан", - "akk": "аккадська", - "ale": "алеутська", - "alt": "алтайська (південна)", - "amh": "амхарська", - "ang": "давньоанглійська (бл. 
450-1100)", - "anp": "ангіка", - "ara": "арабська", - "arc": "арамейська (офіційна; 700-300 до нашої ери)", - "arg": "арагонська", - "arn": "арауканська", - "arp": "арапахо", - "arw": "аравакська", - "asm": "ассамська", - "ast": "астурійська", - "ava": "аварська", - "ave": "авестанська", - "awa": "авадхі", - "aym": "аймарська", - "aze": "азербайджанська", - "bak": "башкирська", - "bal": "белуджійська", - "bam": "бамбара", - "ban": "балійська", - "bas": "баса (Камерун)", - "bej": "бежа", - "bel": "білоруська", - "bem": "бемба (Замбія)", - "ben": "бенгальська", - "bho": "бходжпурі", - "bik": "бікольська", - "bin": "біні", - "bis": "біслама", - "bla": "сісіка", - "bod": "тибетська", - "bos": "боснійська", - "bra": "брай", - "bre": "бретонська", - "bua": "бурятська", - "bug": "бугійська", - "bul": "болгарська", - "byn": "білін", - "cad": "каддо", - "car": "карибська (галібі)", - "cat": "каталонська", - "ceb": "себуано", - "ces": "чеська", - "cha": "чаморо", - "chb": "чибча", - "che": "чеченська", - "chg": "чагатайська", - "chk": "чуукська", - "chm": "марійська (Росія)", - "chn": "чинук; жаргон", - "cho": "чоктау", - "chp": "чипев’ян", - "chr": "черокі", - "chu": "давньослов’янська", - "chv": "чуваська", - "chy": "шаєнн", - "cop": "коптська", - "cor": "корнійська", - "cos": "корсиканська", - "cre": "крі", - "crh": "турецька (кримська)", - "csb": "кашубська", - "cym": "валійська", - "dak": "дакота", - "dan": "данська", - "dar": "даргва", - "del": "делаварська", - "den": "слейві (атабаська)", - "deu": "німецька", - "dgr": "догріб", - "din": "дінка", - "div": "мальдивська", - "doi": "догрі (макромова)", - "dsb": "нижньолужицька", - "dua": "дуала", - "dum": "середньовічна голландська (бл. 
1050-1350)", - "dyu": "діула", - "dzo": "дзонг-ке", - "efi": "ефік", - "egy": "давньоєгипетська", - "eka": "екаджук", - "ell": "грецька (з 1453)", - "elx": "еламська", - "eng": "англійська", - "enm": "середньоанглійська (1100-1500)", - "epo": "есперанто", - "est": "естонська", - "eus": "баскська", - "ewe": "еве", - "ewo": "евондо", - "fan": "фанг (Екваторіальна Гвінея)", - "fao": "фарерська", - "fas": "перська", - "fat": "фанті", - "fij": "фіджійська", - "fil": "філіппінська", - "fin": "фінська", - "fon": "фон", - "fra": "французька", - "frm": "середньофранцузька (бл. 1400-1600)", - "fro": "давньофранцузька (842-бл. 1400)", - "frr": "фризька (північна)", - "frs": "фризька (східна)", - "fry": "фризька (західна)", - "ful": "фулах", - "fur": "фріульська", - "gaa": "га", - "gay": "гайо", - "gba": "гбая (Центральноафриканська Республіка)", - "gez": "гііз", - "gil": "гільбертська", - "gla": "гаельська (Шотландія)", - "gle": "ірландська", - "glg": "галісійська", - "glv": "манкс", - "gmh": "середньоверхньонімецька (бл. 1050-1500)", - "goh": "давньосередньонімецька (бл. 
750-1050)", - "gon": "гонді", - "gor": "горонтало", - "got": "готська", - "grb": "гребо", - "grc": "давньогрецька (до 1453)", - "grn": "гуарані", - "gsw": "німецька (Швейцарія)", - "guj": "гуджараті", - "gwi": "гвічин", - "hai": "хайда", - "hat": "креольська (гаїтянська)", - "hau": "хауса", - "haw": "гавайська", - "heb": "іврит", - "her": "гереро", - "hil": "хілігайнон", - "hin": "хінді", - "hit": "хетська", - "hmn": "хмонг", - "hmo": "хірімоту", - "hrv": "хорватська", - "hsb": "верхньолужицька", - "hun": "угорська", - "hup": "хупа", - "hye": "вірменська", - "iba": "ібанська", - "ibo": "ігбо", - "ido": "ідо", - "iii": "ї (Сичуань)", - "iku": "інуктітут", - "ile": "окциденталь", - "ilo": "ілоко", - "ina": "інтерлінгва (Асоціація міжнародної допоміжної мови)", - "ind": "індонезійська", - "inh": "інгушська", - "ipk": "інупіак", - "isl": "ісландська", - "ita": "італійська", - "jav": "яванська", - "jbo": "ложбан", - "jpn": "японська", - "jpr": "єврейсько-перська", - "jrb": "єврейсько-арабська", - "kaa": "каракалпацька", - "kab": "кабильська", - "kac": "качин", - "kal": "калаалісут", - "kam": "камба (Кенія)", - "kan": "каннада", - "kas": "кашмірська", - "kat": "грузинська", - "kau": "канурі", - "kaw": "каві", - "kaz": "казахська", - "kbd": "кабардінська", - "kha": "кхасі", - "khm": "кхмерська (центральна)", - "kho": "хотаносакська", - "kik": "кікуйю", - "kin": "кіньяруанда", - "kir": "киргизька", - "kmb": "кімбунду", - "kok": "конкані (макромова)", - "kom": "комі", - "kon": "конго", - "kor": "корейська", - "kos": "косрейська", - "kpe": "кпелле", - "krc": "карачаєво-балкарська", - "krl": "карельська", - "kru": "курух", - "kua": "куаньяма", - "kum": "кумикська", - "kur": "курдська", - "kut": "кутенай", - "lad": "ладіно", - "lah": "лахнда", - "lam": "ламба", - "lao": "лаоська", - "lat": "латинська", - "lav": "латиська", - "lez": "лезгінська", - "lim": "лімбурганська", - "lin": "лінгала", - "lit": "литовська", - "lol": "монго", - "loz": "лозі", - "ltz": "люксембурзька", - 
"lua": "луба-лулуа", - "lub": "луба-катанга", - "lug": "ганда", - "lui": "луйсеньо", - "lun": "лунда", - "luo": "луо (Кенія і Танзанія)", - "lus": "лушай", - "mad": "мадурська", - "mag": "магахі", - "mah": "маршальська", - "mai": "майтхілі", - "mak": "макасарська", - "mal": "малаялам", - "man": "мандінго", - "mar": "мараті", - "mas": "масаї", - "mdf": "мокшанська", - "mdr": "мандарська", - "men": "менде (Сьєрра-Леоне)", - "mga": "середньоірландська (900-1200)", - "mic": "мікмак", - "min": "мінангкабау", - "mis": "мови без коду", - "mkd": "македонська", - "mlg": "малагасійська", - "mlt": "мальтійська", - "mnc": "манчжурська", - "mni": "маніпурська", - "moh": "мохаук", - "mon": "монгольська", - "mos": "мосі", - "mri": "маорійська", - "msa": "малайська (макромова)", - "mul": "мови; що належать до декількох родин", - "mus": "крікська", - "mwl": "мірандська", - "mwr": "марварі", - "mya": "бірманська", - "myv": "ерзянська", - "nap": "неаполітанська", - "nau": "науру", - "nav": "навахо", - "nbl": "південна ндебеле", - "nde": "північна ндебеле", - "ndo": "ндонга", - "nds": "нижньонімецька", - "nep": "непальська", - "new": "бхаса (Непал)", - "nia": "ніасійська", - "niu": "ніуе", - "nld": "голландська", - "nno": "норвезька нюноршк", - "nob": "норвезька букмол", - "nog": "ногайська", - "non": "давньонорвезька", - "nor": "норвезька", - "nqo": "н’ко", - "nso": "сото; північне", - "nwc": "неварі (давня)", - "nya": "ньянджа", - "nym": "ньямвезі", - "nyn": "ньянколе", - "nyo": "ньоро", - "nzi": "нзіма", - "oci": "оксітанська (після 1500)", - "oji": "оджибва", - "ori": "орія", - "orm": "оромо", - "osa": "оседжі", - "oss": "осетинська", - "ota": "оттоманська турецька (1500-1928)", - "pag": "пангасінан", - "pal": "пехлевійська", - "pam": "пампанга", - "pan": "пенджабі", - "pap": "папьяменто", - "pau": "палау", - "peo": "давньоперська (бл. 
600-400 до н.е.)", - "phn": "фінікійська", - "pli": "палі", - "pol": "польська", - "pon": "понапе", - "por": "португальська", - "pro": "провансальська (давня; до 1500 року)", - "pus": "пуштунська", - "que": "кечуа", - "raj": "раджастхані", - "rap": "рапануї", - "rar": "маорійська (острови Кука)", - "roh": "ретророманська", - "rom": "ромська", - "ron": "румунська", - "run": "рунді", - "rup": "македоно-румунська", - "rus": "російська", - "sad": "сандаве", - "sag": "санго", - "sah": "якутська", - "sam": "арамейська (самаритянська)", - "san": "санскрит", - "sas": "сасакська", - "sat": "санталі", - "scn": "сицилійська", - "sco": "шотландська", - "sel": "селькупська", - "sga": "давньоірландська (до 900)", - "shn": "шан", - "sid": "сидама", - "sin": "сингалійська", - "slk": "словацька", - "slv": "словенська", - "sma": "саамська (південна)", - "sme": "саамська (північна)", - "smj": "лулесаамська", - "smn": "саамська (інарі)", - "smo": "самоанська", - "sms": "саамська (сколт)", - "sna": "шона", - "snd": "синдхі", - "snk": "сонікійська", - "sog": "согдійська", - "som": "сомалійська", - "sot": "сото; південна", - "spa": "іспанська", - "sqi": "албанська", - "srd": "сардинська", - "srn": "сранан-тонго", - "srp": "сербська", - "srr": "серер", - "ssw": "свазі", - "suk": "сукума", - "sun": "сунданська", - "sus": "сусу", - "sux": "шумерська", - "swa": "суахілі (макромова)", - "swe": "шведська", - "syc": "сирійська (класична)", - "syr": "сирійська", - "tah": "таїтянська", - "tam": "тамільська", - "tat": "татарська", - "tel": "телугу", - "tem": "тімне", - "ter": "терено", - "tet": "тетум", - "tgk": "таджицька", - "tgl": "тагалог", - "tha": "таїландська", - "tig": "тігре", - "tir": "тигринійська", - "tiv": "тиві", - "tkl": "токелау", - "tlh": "клінгонська", - "tli": "тлінгіт", - "tmh": "тамашек", - "tog": "тонга (ньяса)", - "ton": "тонга (острови Тонга)", - "tpi": "ток-пісін", - "tsi": "цимшіан", - "tsn": "тсвана", - "tso": "цонга", - "tuk": "туркменська", - "tum": "тумбука", - "tur": 
"турецька", - "tvl": "тувалу", - "twi": "тві", - "tyv": "тувінська", - "udm": "удмурдська", - "uga": "угаритська", - "uig": "уйгурська", - "ukr": "українська", - "umb": "умбунду", - "und": "невизначена", - "urd": "урду", - "uzb": "узбецька", - "vai": "вай", - "ven": "венда", - "vie": "в'єтнамська", - "vol": "волапюк", - "vot": "водська", - "wal": "волайтта", - "war": "варай (Філіппіни)", - "was": "вашо", - "wln": "валлонська", - "wol": "волоф", - "xal": "калмицька", - "xho": "хоза", - "yao": "яо", - "yap": "япська", - "yid": "ідиш", - "yor": "йоруба", - "zap": "сапотецька", - "zbl": "бліссимволіка", - "zen": "зеназька", - "zha": "чжуань", - "zho": "китайська", - "zul": "зулуська", - "zun": "зуні", - "zxx": "немає мовних даних", - "zza": "заза" - }, - "de": { - "aar": "Danakil-Sprache", - "abk": "Abchasisch", - "ace": "Aceh-Sprache", - "ach": "Acholi-Sprache", - "ada": "Adangme-Sprache", - "ady": "Adygisch", - "afh": "Afrihili", - "afr": "Afrikaans", - "ain": "Ainu-Sprache (Japan)", - "aka": "Akan-Sprache", - "akk": "Akkadisch", - "ale": "Aleutisch", - "alt": "Altaisch; Süd", - "amh": "Amharisch", - "ang": "Englisch; Alt (ca. 
450-1100)", - "anp": "Anga-Sprache", - "ara": "Arabisch", - "arc": "Aramäisch", - "arg": "Aragonesisch", - "arn": "Mapudungun", - "arp": "Arapaho", - "arw": "Arawakisch", - "asm": "Assamesisch", - "ast": "Asturisch", - "ava": "Awarisch", - "ave": "Avestisch", - "awa": "Awadhi", - "aym": "Aymara", - "aze": "Aserbaidschanisch", - "bak": "Baschkirisch", - "bal": "Belutschisch", - "bam": "Bambara", - "ban": "Balinesisch", - "bas": "Basa (Kamerun)", - "bej": "Bedja (Bedauye)", - "bel": "Weißrussisch", - "bem": "Bemba (Sambia)", - "ben": "Bengalisch", - "bho": "Bhojpuri", - "bik": "Bikol", - "bin": "Bini", - "bis": "Bislama", - "bla": "Blackfoot", - "bod": "Tibetisch", - "bos": "Bosnisch", - "bra": "Braj-Bhakha", - "bre": "Bretonisch", - "bua": "Burjatisch", - "bug": "Buginesisch", - "bul": "Bulgarisch", - "byn": "Bilin", - "cad": "Caddo", - "car": "Karibisch; Galíbi", - "cat": "Katalanisch", - "ceb": "Cebuano", - "ces": "Tschechisch", - "cha": "Chamorro", - "chb": "Chibcha", - "che": "Tschetschenisch", - "chg": "Tschagataisch", - "chk": "Trukesisch", - "chm": "Mari (Russland)", - "chn": "Chinook", - "cho": "Choctaw", - "chp": "Chipewyan", - "chr": "Cherokee", - "chu": "Altkirchenslawisch", - "chv": "Tschuwaschisch", - "chy": "Cheyenne", - "cop": "Koptisch", - "cor": "Kornisch", - "cos": "Korsisch", - "cre": "Cree", - "crh": "Türkisch; Krimtatarisch", - "csb": "Kaschubisch", - "cym": "Walisisch", - "dak": "Dakota", - "dan": "Dänisch", - "dar": "Darginisch", - "del": "Delaware", - "den": "Slave (Athapaskisch)", - "deu": "Deutsch", - "dgr": "Dogrib", - "din": "Dinka", - "div": "Dhivehi", - "doi": "Dogri (Makrosprache)", - "dsb": "Sorbisch; Nieder", - "dua": "Duala", - "dum": "Niederländisch; Mittel (ca. 
1050-1350)", - "dyu": "Dyula", - "dzo": "Dzongkha", - "efi": "Efik", - "egy": "Ägyptisch (Historisch)", - "eka": "Ekajuk", - "ell": "Neugriechisch (ab 1453)", - "elx": "Elamisch", - "eng": "Englisch", - "enm": "Mittelenglisch", - "epo": "Esperanto", - "est": "Estnisch", - "eus": "Baskisch", - "ewe": "Ewe-Sprache", - "ewo": "Ewondo", - "fan": "Fang (Äquatorial-Guinea)", - "fao": "Färöisch", - "fas": "Persisch", - "fat": "Fanti", - "fij": "Fidschianisch", - "fil": "Filipino", - "fin": "Finnisch", - "fon": "Fon", - "fra": "Französisch", - "frm": "Französisch; Mittel (ca. 1400 - 1600)", - "fro": "Französisch; Alt (842 - ca. 1400)", - "frr": "Friesisch; Nord", - "frs": "Friesisch; Ost", - "fry": "Friesisch; West", - "ful": "Ful", - "fur": "Friaulisch", - "gaa": "Ga", - "gay": "Gayo", - "gba": "Gbaya (Zentralafrikanische Republik)", - "gez": "Altäthiopisch", - "gil": "Gilbertesisch", - "gla": "Gälisch; Schottisch", - "gle": "Irisch", - "glg": "Galicisch", - "glv": "Manx", - "gmh": "Mittelhochdeutsch (ca. 1050-1500)", - "goh": "Althochdeutsch (ca. 
750-1050)", - "gon": "Gondi", - "gor": "Gorontalesisch", - "got": "Gotisch", - "grb": "Grebo", - "grc": "Altgriechisch (bis 1453)", - "grn": "Guaraní", - "gsw": "Schweizerdeutsch", - "guj": "Gujarati", - "gwi": "Kutchin", - "hai": "Haida", - "hat": "Kreolisch; Haitisch", - "hau": "Haussa", - "haw": "Hawaiianisch", - "heb": "Hebräisch", - "her": "Herero", - "hil": "Hiligaynon", - "hin": "Hindi", - "hit": "Hethitisch", - "hmn": "Miao-Sprachen", - "hmo": "Hiri-Motu", - "hrv": "Kroatisch", - "hsb": "Obersorbisch", - "hun": "Ungarisch", - "hup": "Hupa", - "hye": "Armenisch", - "iba": "Iban", - "ibo": "Ibo", - "ido": "Ido", - "iii": "Yi; Sichuan", - "iku": "Inuktitut", - "ile": "Interlingue", - "ilo": "Ilokano", - "ina": "Interlingua (Internationale Hilfssprachen-Vereinigung)", - "ind": "Indonesisch", - "inh": "Inguschisch", - "ipk": "Inupiaq", - "isl": "Isländisch", - "ita": "Italienisch", - "jav": "Javanisch", - "jbo": "Lojban", - "jpn": "Japanisch", - "jpr": "Jüdisch-Persisch", - "jrb": "Jüdisch-Arabisch", - "kaa": "Karakalpakisch", - "kab": "Kabylisch", - "kac": "Kachinisch", - "kal": "Kalaallisut (Grönländisch)", - "kam": "Kamba (Kenia)", - "kan": "Kannada", - "kas": "Kaschmirisch", - "kat": "Georgisch", - "kau": "Kanuri", - "kaw": "Kawi; Altjavanisch", - "kaz": "Kasachisch", - "kbd": "Kabardisch", - "kha": "Khasi-Sprache", - "khm": "Khmer; Zentral", - "kho": "Sakisch", - "kik": "Kikuyu", - "kin": "Rwanda", - "kir": "Kirgisisch", - "kmb": "Mbundu; Kimbundu", - "kok": "Konkani (Makrosprache)", - "kom": "Komi", - "kon": "Kongo", - "kor": "Koreanisch", - "kos": "Kosraeanisch", - "kpe": "Kpelle", - "krc": "Karachay-Balkar", - "krl": "Karenisch", - "kru": "Kurukh", - "kua": "Kwanyama", - "kum": "Kumükisch", - "kur": "Kurdisch", - "kut": "Kutenai", - "lad": "Judenspanisch", - "lah": "Lahnda", - "lam": "Banjari; Lamba", - "lao": "Laotisch", - "lat": "Lateinisch", - "lav": "Lettisch", - "lez": "Lesgisch", - "lim": "Limburgisch", - "lin": "Lingala", - "lit": "Litauisch", - 
"lol": "Mongo", - "loz": "Rotse", - "ltz": "Luxemburgisch", - "lua": "Luba-Lulua", - "lub": "Luba-Katanga", - "lug": "Ganda", - "lui": "Luiseno", - "lun": "Lunda", - "luo": "Luo (Kenia und Tansania)", - "lus": "Lushai", - "mad": "Maduresisch", - "mag": "Khotta", - "mah": "Marshallesisch", - "mai": "Maithili", - "mak": "Makassarisch", - "mal": "Malayalam", - "man": "Mande; Mandigo; Malinke", - "mar": "Marathi", - "mas": "Massai", - "mdf": "Moksha", - "mdr": "Mandaresisch", - "men": "Mende (Sierra Leone)", - "mga": "Mittelirisch (900-1200)", - "mic": "Mikmak", - "min": "Minangkabau", - "mis": "Nichtklassifizierte Sprachen", - "mkd": "Makedonisch", - "mlg": "Madegassisch", - "mlt": "Maltesisch", - "mnc": "Manchu; Mandschurisch", - "mni": "Meithei-Sprache", - "moh": "Mohawk", - "mon": "Mongolisch", - "mos": "Mossi", - "mri": "Maori", - "msa": "Malaiisch (Makrosprache)", - "mul": "Mehrsprachig; Polyglott", - "mus": "Muskogee", - "mwl": "Mirandesisch", - "mwr": "Marwari", - "mya": "Burmesisch", - "myv": "Erzya", - "nap": "Neapolitanisch", - "nau": "Nauruanisch", - "nav": "Navajo", - "nbl": "Ndebele (Süd)", - "nde": "Ndebele (Nord)", - "ndo": "Ndonga", - "nds": "Plattdeutsch", - "nep": "Nepali", - "new": "Bhasa; Nepalesisch", - "nia": "Nias", - "niu": "Niue", - "nld": "Niederländisch", - "nno": "Nynorsk (Norwegen)", - "nob": "Norwegisch-Bokmål", - "nog": "Nogai", - "non": "Altnordisch", - "nor": "Norwegisch", - "nqo": "N'Ko", - "nso": "Sotho; Nord", - "nwc": "Newari; Alt", - "nya": "Nyanja", - "nym": "Nyamwezi", - "nyn": "Nyankole", - "nyo": "Nyoro", - "nzi": "Nzima", - "oci": "Okzitanisch (nach 1500)", - "oji": "Ojibwa", - "ori": "Orija", - "orm": "Oromo", - "osa": "Osage", - "oss": "Ossetisch", - "ota": "Ottomanisch (Osmanisch/Türkisch) (1500-1928)", - "pag": "Pangasinan", - "pal": "Mittelpersisch", - "pam": "Pampanggan", - "pan": "Panjabi", - "pap": "Papiamento", - "pau": "Palau", - "peo": "Persisch; Alt (ca. 
600-400 v.Chr.)", - "phn": "Phönikisch", - "pli": "Pali", - "pol": "Polnisch", - "pon": "Ponapeanisch", - "por": "Portugiesisch", - "pro": "Altokzitanisch; Altprovenzalisch (bis 1500)", - "pus": "Paschtu; Afghanisch", - "que": "Ketschua", - "raj": "Rajasthani", - "rap": "Osterinsel-Sprache; Rapanui", - "rar": "Maori; Cook-Inseln", - "roh": "Bündnerromanisch", - "rom": "Romani; Zigeunersprache", - "ron": "Rumänisch", - "run": "Rundi", - "rup": "Rumänisch; Mezedonisch", - "rus": "Russisch", - "sad": "Sandawe", - "sag": "Sango", - "sah": "Jakutisch", - "sam": "Aramäisch; Samaritanisch", - "san": "Sanskrit", - "sas": "Sassak", - "sat": "Santali", - "scn": "Sizilianisch", - "sco": "Schottisch", - "sel": "Selkupisch", - "sga": "Altirisch (bis 900)", - "shn": "Schan", - "sid": "Sidamo", - "sin": "Singhalesisch", - "slk": "Slowakisch", - "slv": "Slowenisch", - "sma": "Sami; Süd", - "sme": "Nordsamisch", - "smj": "Samisch (Lule)", - "smn": "Samisch; Inari", - "smo": "Samoanisch", - "sms": "Samisch; Skolt", - "sna": "Schona", - "snd": "Sindhi", - "snk": "Soninke", - "sog": "Sogdisch", - "som": "Somali", - "sot": "Sotho (Süd)", - "spa": "Spanisch; Kastilianisch", - "sqi": "Albanisch", - "srd": "Sardisch", - "srn": "Sranan Tongo", - "srp": "Serbisch", - "srr": "Serer", - "ssw": "Swazi", - "suk": "Sukuma", - "sun": "Sundanesisch", - "sus": "Susu", - "sux": "Sumerisch", - "swa": "Swahili (Makrosprache)", - "swe": "Schwedisch", - "syc": "Syrisch; Klassisch", - "syr": "Syrisch", - "tah": "Tahitisch", - "tam": "Tamilisch", - "tat": "Tatarisch", - "tel": "Telugu", - "tem": "Temne", - "ter": "Tereno", - "tet": "Tetum", - "tgk": "Tadschikisch", - "tgl": "Tagalog", - "tha": "Thailändisch", - "tig": "Tigre", - "tir": "Tigrinja", - "tiv": "Tiv", - "tkl": "Tokelauanisch", - "tlh": "Klingonisch", - "tli": "Tlingit", - "tmh": "Tamaseq", - "tog": "Tonga (Nyasa)", - "ton": "Tonga (Tonga-Inseln)", - "tpi": "Neumelanesisch; Pidgin", - "tsi": "Tsimshian", - "tsn": "Tswana", - "tso": "Tsonga", - 
"tuk": "Turkmenisch", - "tum": "Tumbuka", - "tur": "Türkisch", - "tvl": "Elliceanisch", - "twi": "Twi", - "tyv": "Tuwinisch", - "udm": "Udmurt", - "uga": "Ugaritisch", - "uig": "Uigurisch", - "ukr": "Ukrainisch", - "umb": "Mbundu; Umbundu", - "und": "Unbestimmbar", - "urd": "Urdu", - "uzb": "Usbekisch", - "vai": "Vai", - "ven": "Venda", - "vie": "Vietnamesisch", - "vol": "Volapük", - "vot": "Wotisch", - "wal": "Wolaytta", - "war": "Waray (Philippinen)", - "was": "Washo", - "wln": "Wallonisch", - "wol": "Wolof", - "xal": "Kalmükisch", - "xho": "Xhosa", - "yao": "Yao", - "yap": "Yapesisch", - "yid": "Jiddisch", - "yor": "Joruba", - "zap": "Zapotekisch", - "zbl": "Bliss-Symbole", - "zen": "Zenaga", - "zha": "Zhuang", - "zho": "Chinesisch", - "zul": "Zulu", - "zun": "Zuni", - "zxx": "Kein sprachlicher Inhalt", - "zza": "Zaza" - }, "ja": { "aar": "アファル語", "abk": "アブハジア語", @@ -5427,6 +4167,1266 @@ LANGUAGE_NAMES = { "zxx": "No linguistic content", "zza": "Zaza" }, + "nl": { + "aar": "Afar; Hamitisch", + "abk": "Abchazisch", + "ace": "Achinees", + "ach": "Acholi", + "ada": "Adangme", + "ady": "Adyghe", + "afh": "Afrihili", + "afr": "Afrikaans", + "ain": "Ainu (Japan)", + "aka": "Akaans", + "akk": "Akkadiaans", + "ale": "Aleut", + "alt": "Altajs; zuidelijk", + "amh": "Amhaars; Amharisch", + "ang": "Engels; oud (ca. 
450-1100)", + "anp": "Angika", + "ara": "Arabisch", + "arc": "Aramees; officieel (700-300 B.C.)", + "arg": "Aragonees", + "arn": "Mapudungun", + "arp": "Arapaho", + "arw": "Arawak", + "asm": "Assamees; Assami", + "ast": "Asturisch", + "ava": "Avaars; Awari", + "ave": "Avestisch", + "awa": "Awadhi", + "aym": "Aymara", + "aze": "Azerbeidzjaans", + "bak": "Basjkiers; Basjkirisch", + "bal": "Balutsji; Baluchi", + "bam": "Bambara", + "ban": "Balinees", + "bas": "Basa (Kameroen)", + "bej": "Beja", + "bel": "Wit-Russisch; Belarussisch", + "bem": "Bemba (Zambia)", + "ben": "Bengaals", + "bho": "Bhojpuri", + "bik": "Bikol", + "bin": "Bini; Edo", + "bis": "Bislama", + "bla": "Siksika", + "bod": "Tibetaans", + "bos": "Bosnisch", + "bra": "Braj", + "bre": "Bretons; Bretoens", + "bua": "Boeriaats", + "bug": "Buginees", + "bul": "Bulgaars", + "byn": "Bilin", + "cad": "Caddo", + "car": "Caribische talen", + "cat": "Catalaans", + "ceb": "Cebuano", + "ces": "Tsjechisch", + "cha": "Chamorro", + "chb": "Tsjibtsja", + "che": "Tsjetsjeens", + "chg": "Chagatai", + "chk": "Chukees", + "chm": "Mari (Rusland)", + "chn": "Chinook-jargon", + "cho": "Choctaw", + "chp": "Chipewyaans", + "chr": "Cherokee", + "chu": "Slavisch; oud (kerk)", + "chv": "Tsjoevasjisch", + "chy": "Cheyenne", + "cop": "Koptisch", + "cor": "Cornisch", + "cos": "Corsicaans", + "cre": "Cree", + "crh": "Turks; Crimean", + "csb": "Kasjoebiaans", + "cym": "Welsh", + "dak": "Dakota", + "dan": "Deens", + "dar": "Dargwa", + "del": "Delaware", + "den": "Slavisch (Athapascaans)", + "deu": "Duits", + "dgr": "Dogrib", + "din": "Dinka", + "div": "Divehi", + "doi": "Dogri", + "dsb": "Sorbisch; lager", + "dua": "Duala", + "dum": "Nederlands; middel (ca. 
1050-1350)", + "dyu": "Dyula", + "dzo": "Dzongkha", + "efi": "Efikisch", + "egy": "Egyptisch (antiek)", + "eka": "Ekajuk", + "ell": "Grieks; Modern (1453-)", + "elx": "Elamitisch", + "eng": "Engels", + "enm": "Engels; middel (1100-1500)", + "epo": "Esperanto", + "est": "Estlands", + "eus": "Baskisch", + "ewe": "Ewe", + "ewo": "Ewondo", + "fan": "Fang", + "fao": "Faeröers", + "fas": "Perzisch", + "fat": "Fanti", + "fij": "Fijisch", + "fil": "Filipijns", + "fin": "Fins", + "fon": "Fon", + "fra": "Frans", + "frm": "Frans; middel (ca. 1400-1600)", + "fro": "Frans; oud (842-ca. 1400)", + "frr": "Fries; noordelijk (Duitsland)", + "frs": "Fries; oostelijk (Duitsland)", + "fry": "Fries", + "ful": "Fulah", + "fur": "Friulisch", + "gaa": "Ga", + "gay": "Gayo", + "gba": "Gbaya (Centraal Afrikaanse Republiek)", + "gez": "Ge'ez", + "gil": "Gilbertees", + "gla": "Keltisch; schots", + "gle": "Iers", + "glg": "Galiciaans", + "glv": "Manx", + "gmh": "Duits; middel hoog (ca. 1050-1500)", + "goh": "Duits; oud hoog (ca. 
750-1050)", + "gon": "Gondi", + "gor": "Gorontalo", + "got": "Gothisch", + "grb": "Grebo", + "grc": "Grieks; antiek (tot 1453)", + "grn": "Guarani", + "gsw": "Duits; Zwitserland", + "guj": "Gujarati", + "gwi": "Gwichʼin", + "hai": "Haida", + "hat": "Creools; Haïtiaans", + "hau": "Hausa", + "haw": "Hawaiiaans", + "heb": "Hebreeuws", + "her": "Herero", + "hil": "Hiligainoons", + "hin": "Hindi", + "hit": "Hittitisch", + "hmn": "Hmong", + "hmo": "Hiri Motu", + "hrv": "Kroatisch", + "hsb": "Servisch; hoger", + "hun": "Hongaars", + "hup": "Hupa", + "hye": "Armeens", + "iba": "Ibaans", + "ibo": "Igbo", + "ido": "Ido", + "iii": "Yi; Sichuan - Nuosu", + "iku": "Inuktitut", + "ile": "Interlingue", + "ilo": "Iloko", + "ina": "Interlingua (International Auxiliary Language Association)", + "ind": "Indonesisch", + "inh": "Ingoesjetisch", + "ipk": "Inupiak", + "isl": "IJslands", + "ita": "Italiaans", + "jav": "Javaans", + "jbo": "Lojbaans", + "jpn": "Japans", + "jpr": "Joods-Perzisch", + "jrb": "Joods-Arabisch", + "kaa": "Kara-Kalpak", + "kab": "Kabyle", + "kac": "Katsjin", + "kal": "Groenlands", + "kam": "Kamba (Kenya)", + "kan": "Kannada; Kanara; Kanarees", + "kas": "Kashmiri", + "kat": "Georgisch", + "kau": "Kanuri", + "kaw": "Kawi", + "kaz": "Kazachs", + "kbd": "Kabardisch; Tsjerkessisch", + "kha": "Khasi", + "khm": "Khmer, Cambodjaans", + "kho": "Khotanees", + "kik": "Kikuyu", + "kin": "Kinyarwanda", + "kir": "Kirgizisch", + "kmb": "Kimbundu", + "kok": "Konkani", + "kom": "Komi", + "kon": "Kikongo", + "kor": "Koreaans", + "kos": "Kosraeaans", + "kpe": "Kpelle", + "krc": "Karatsjay-Balkar", + "krl": "Karelisch", + "kru": "Kurukh", + "kua": "Kuanyama", + "kum": "Kumyk", + "kur": "Koerdisch", + "kut": "Kutenaïsch", + "lad": "Ladino", + "lah": "Lahnda", + "lam": "Lamba", + "lao": "Laotiaans", + "lat": "Latijn", + "lav": "Lets", + "lez": "Lezghiaans", + "lim": "Limburgs", + "lin": "Lingala", + "lit": "Litouws", + "lol": "Mongo", + "loz": "Lozi", + "ltz": "Luxemburgs", + "lua": 
"Luba-Lulua", + "lub": "Luba-Katanga", + "lug": "Luganda", + "lui": "Luiseno", + "lun": "Lunda", + "luo": "Luo (Kenia en Tanzania)", + "lus": "Lushai", + "mad": "Madurees", + "mag": "Magahisch", + "mah": "Marshallees", + "mai": "Maithili", + "mak": "Makasar", + "mal": "Malayalam", + "man": "Mandingo", + "mar": "Marathi", + "mas": "Masai", + "mdf": "Moksja", + "mdr": "Mandars", + "men": "Mende", + "mga": "Iers; middel (900-1200)", + "mic": "Mi'kmaq; Micmac", + "min": "Minangkabau", + "mis": "Niet-gecodeerde talen", + "mkd": "Macedonisch", + "mlg": "Malagassisch", + "mlt": "Maltees", + "mnc": "Manchu", + "mni": "Manipuri", + "moh": "Mohawk", + "mon": "Mongools", + "mos": "Mossisch", + "mri": "Maori", + "msa": "Maleis", + "mul": "Meerdere talen", + "mus": "Creek", + "mwl": "Mirandees", + "mwr": "Marwari", + "mya": "Burmees", + "myv": "Erzya", + "nap": "Napolitaans", + "nau": "Nauruaans", + "nav": "Navajo", + "nbl": "Ndebele; zuid", + "nde": "Ndebele; noord", + "ndo": "Ndonga", + "nds": "Duits; Laag", + "nep": "Nepalees", + "new": "Newari; Nepal", + "nia": "Nias", + "niu": "Niueaans", + "nld": "Nederlands", + "nno": "Noors; Nynorsk", + "nob": "Noors; Bokmål", + "nog": "Nogai", + "non": "Noors; oud", + "nor": "Noors", + "nqo": "N'Ko", + "nso": "Pedi; Sepedi; Noord-Sothotisch", + "nwc": "Newari; Klassiek Nepal", + "nya": "Nyanja", + "nym": "Nyamwezi", + "nyn": "Nyankools", + "nyo": "Nyoro", + "nzi": "Nzima", + "oci": "Occitaans (na 1500)", + "oji": "Ojibwa", + "ori": "Oriya", + "orm": "Oromo", + "osa": "Osaags", + "oss": "Ossetisch", + "ota": "Turks; ottomaans (1500-1928)", + "pag": "Pangasinaans", + "pal": "Pehlevi", + "pam": "Pampanga", + "pan": "Punjabi", + "pap": "Papiamento", + "pau": "Palauaans", + "peo": "Perzisch; oud (ca. 
600-400 B.C.)", + "phn": "Foenisisch", + "pli": "Pali", + "pol": "Pools", + "pon": "Pohnpeiaans", + "por": "Portugees", + "pro": "Provençaals; oud (tot 1500)", + "pus": "Poesjto", + "que": "Quechua", + "raj": "Rajasthani", + "rap": "Rapanui", + "rar": "Rarotongan; Cookeilanden Maori", + "roh": "Reto-Romaans", + "rom": "Romani", + "ron": "Roemeens", + "run": "Rundi", + "rup": "Roemeens; Macedo-", + "rus": "Russisch", + "sad": "Sandawe", + "sag": "Sangho", + "sah": "Jakoets", + "sam": "Aramees; Samaritaans", + "san": "Sanskriet", + "sas": "Sasaaks", + "sat": "Santali", + "scn": "Siciliaans", + "sco": "Schots", + "sel": "Sulkoeps", + "sga": "Iers; oud (tot 900)", + "shn": "Sjaans", + "sid": "Sidamo", + "sin": "Sinhala", + "slk": "Slowaaks", + "slv": "Sloveens", + "sma": "Samisch; zuid, Laps; zuid", + "sme": "Samisch; noord, Laps; noord", + "smj": "Lule Sami", + "smn": "Sami; Inari, Laps; Inari", + "smo": "Samoaans", + "sms": "Sami; Skolt, Laps; Skolt", + "sna": "Shona", + "snd": "Sindhi", + "snk": "Soninke", + "sog": "Sogdiaans", + "som": "Somalisch", + "sot": "Sothaans; zuidelijk", + "spa": "Spaans", + "sqi": "Albanees", + "srd": "Sardinisch", + "srn": "Sranan Tongo", + "srp": "Servisch", + "srr": "Serer", + "ssw": "Swati", + "suk": "Sukuma", + "sun": "Soendanees; Sundanees", + "sus": "Susu", + "sux": "Sumerisch", + "swa": "Swahili", + "swe": "Zweeds", + "syc": "Syriac; Klassiek", + "syr": "Syrisch", + "tah": "Tahitisch", + "tam": "Tamil", + "tat": "Tataars", + "tel": "Telugu", + "tem": "Timne", + "ter": "Tereno", + "tet": "Tetum", + "tgk": "Tadzjieks", + "tgl": "Tagalog", + "tha": "Thai", + "tig": "Tigre", + "tir": "Tigrinya", + "tiv": "Tiv", + "tkl": "Tokelau", + "tlh": "Klingon; tlhIngan-Hol", + "tli": "Tlingit", + "tmh": "Tamasjek", + "tog": "Tonga (Nyasa)", + "ton": "Tonga (Tonga-eilanden)", + "tpi": "Tok Pisin", + "tsi": "Tsimsjiaans", + "tsn": "Tswana", + "tso": "Tsonga", + "tuk": "Turkmeens", + "tum": "Tumbuka", + "tur": "Turks", + "tvl": "Tuvalu", + "twi": 
"Twi", + "tyv": "Tuviniaans", + "udm": "Udmurts", + "uga": "Ugaritisch", + "uig": "Oeigoers; Oejgoers", + "ukr": "Oekraïens", + "umb": "Umbundu", + "und": "Onbepaald", + "urd": "Urdu", + "uzb": "Oezbeeks", + "vai": "Vai", + "ven": "Venda", + "vie": "Vietnamees", + "vol": "Volapük", + "vot": "Votisch", + "wal": "Walamo", + "war": "Waray (Filipijns)", + "was": "Wasjo", + "wln": "Waals", + "wol": "Wolof", + "xal": "Kalmyk", + "xho": "Xhosa", + "yao": "Yao", + "yap": "Yapees", + "yid": "Jiddisch", + "yor": "Yoruba", + "zap": "Zapotec", + "zbl": "Blissymbolen", + "zen": "Zenaga", + "zha": "Zhuang, Tsjoeang", + "zho": "Chinees", + "zul": "Zoeloe", + "zun": "Zuni", + "zxx": "Geen linguïstische inhoud", + "zza": "Zaza" + }, + "pl": { + "aar": "afarski", + "abk": "abchaski", + "ace": "aczineski", + "ach": "aczoli", + "ada": "adangme", + "ady": "adygejski", + "afh": "afrihili", + "afr": "afrykanerski", + "ain": "ajnoski (Japonia)", + "aka": "akan", + "akk": "akadyjski", + "ale": "aleucki", + "alt": "ałtajski południowy", + "amh": "amharski", + "ang": "Staroangielski (ok. 
450-1100)", + "anp": "angika", + "ara": "arabski", + "arc": "aramejski oficjalny (700-300 p.n.e.)", + "arg": "aragoński", + "arn": "araukański", + "arp": "arapaho", + "arw": "arawak", + "asm": "asamski", + "ast": "asturyjski", + "ava": "awarski", + "ave": "awestyjski", + "awa": "awadhi", + "aym": "ajmara", + "aze": "azerski", + "bak": "baszkirski", + "bal": "baluczi", + "bam": "bambara", + "ban": "balijski", + "bas": "basa (Kamerun)", + "bej": "bedża", + "bel": "białoruski", + "bem": "bemba (Zambia)", + "ben": "bengalski", + "bho": "bhodźpuri", + "bik": "bikol", + "bin": "edo", + "bis": "bislama", + "bla": "siksika", + "bod": "tybetański", + "bos": "bośniacki", + "bra": "bradź", + "bre": "bretoński", + "bua": "buriacki", + "bug": "bugijski", + "bul": "bułgarski", + "byn": "blin", + "cad": "kaddo", + "car": "karaibski galibi", + "cat": "kataloński", + "ceb": "cebuański", + "ces": "czeski", + "cha": "czamorro", + "chb": "czibcza", + "che": "czeczeński", + "chg": "czagatajski", + "chk": "chuuk", + "chm": "maryjski (Rosja)", + "chn": "żargon chinoocki", + "cho": "czoktaw", + "chp": "chipewyan", + "chr": "czerokeski", + "chu": "starosłowiański", + "chv": "czuwaski", + "chy": "czejeński", + "cop": "koptyjski", + "cor": "kornijski", + "cos": "korsykański", + "cre": "kri", + "crh": "krymskotatarski", + "csb": "kaszubski", + "cym": "walijski", + "dak": "dakota", + "dan": "duński", + "dar": "dargwijski", + "del": "delaware", + "den": "slavey (atapaskański)", + "deu": "niemiecki", + "dgr": "dogrib", + "din": "dinka", + "div": "malediwski; divehi", + "doi": "dogri (makrojęzyk)", + "dsb": "dolnołużycki", + "dua": "duala", + "dum": "holenderski średniowieczny (ok. 
1050-1350)", + "dyu": "diula", + "dzo": "dzongka", + "efi": "efik", + "egy": "egipski (starożytny)", + "eka": "ekajuk", + "ell": "grecki współczesny (1453-)", + "elx": "elamicki", + "eng": "Angielski", + "enm": "angielski średniowieczny (1100-1500)", + "epo": "esperanto", + "est": "estoński", + "eus": "baskijski", + "ewe": "ewe", + "ewo": "ewondo", + "fan": "fang (Gwinea Równikowa)", + "fao": "farerski", + "fas": "perski", + "fat": "fanti", + "fij": "fidżyjski", + "fil": "pilipino", + "fin": "fiński", + "fon": "fon", + "fra": "francuski", + "frm": "francuski średniowieczny (ok. 1400-1600)", + "fro": "starofrancuski (842-ok. 1400)", + "frr": "północnofryzyjski", + "frs": "wschodniofryzyjski", + "fry": "zachodniofryzyjski", + "ful": "fulani", + "fur": "friulski", + "gaa": "ga", + "gay": "gayo", + "gba": "gbaya (Republika Środkowoafrykańska)", + "gez": "gyyz", + "gil": "gilbertański", + "gla": "szkocki gaelicki", + "gle": "irlandzki", + "glg": "galicyjski", + "glv": "manx", + "gmh": "średnio-wysoko-niemiecki (ok. 1050-1500)", + "goh": "staro-wysoko-niemiecki (ok. 
750-1050)", + "gon": "gondi", + "gor": "gorontalo", + "got": "gocki", + "grb": "grebo", + "grc": "grecki starożytny (do 1453)", + "grn": "guarani", + "gsw": "niemiecki szwajcarski", + "guj": "gudźarati", + "gwi": "gwichʼin", + "hai": "haida", + "hat": "kreolski haitański", + "hau": "hausa", + "haw": "hawajski", + "heb": "hebrajski", + "her": "herero", + "hil": "hiligajnon", + "hin": "hindi", + "hit": "hetycki", + "hmn": "hmong", + "hmo": "hiri motu", + "hrv": "chorwacki", + "hsb": "górnołużycki", + "hun": "węgierski", + "hup": "hupa", + "hye": "ormiański", + "iba": "ibanag", + "ibo": "ibo", + "ido": "ido", + "iii": "syczuański", + "iku": "inuktitut", + "ile": "interlingue", + "ilo": "ilokano", + "ina": "interlingua (Międzynarodowe Stowarzyszenie Języka Pomocniczego)", + "ind": "indonezyjski", + "inh": "inguski", + "ipk": "inupiaq", + "isl": "islandzki", + "ita": "włoski", + "jav": "jawajski", + "jbo": "lojban", + "jpn": "japoński", + "jpr": "judeo-perski", + "jrb": "judeoarabski", + "kaa": "karakałpacki", + "kab": "kabylski", + "kac": "kaczin", + "kal": "kalaallisut", + "kam": "kamba (Kenia)", + "kan": "kannada", + "kas": "kaszmirski", + "kat": "gruziński", + "kau": "kanuri", + "kaw": "kawi", + "kaz": "kazaski", + "kbd": "kabardyjski", + "kha": "khasi", + "khm": "środkowokhmerski", + "kho": "chotański", + "kik": "kikiju", + "kin": "ruanda", + "kir": "kirgiski", + "kmb": "kimbundu", + "kok": "konkani (makrojęzyk)", + "kom": "komi", + "kon": "kongo", + "kor": "koreański", + "kos": "kosrae", + "kpe": "kpelle", + "krc": "karaczajsko-bałkarski", + "krl": "karelski", + "kru": "kurukh", + "kua": "kwanyama", + "kum": "kumycki", + "kur": "kurdyjski", + "kut": "kutenai", + "lad": "ladino", + "lah": "lahnda", + "lam": "lamba", + "lao": "laotański", + "lat": "łaciński", + "lav": "łotewski", + "lez": "lezgiński", + "lim": "limburgijski", + "lin": "lingala", + "lit": "litewski", + "lol": "mongo", + "loz": "lozi", + "ltz": "luksemburski", + "lua": "luba-lulua", + "lub": 
"luba-katanga", + "lug": "luganda", + "lui": "luiseno", + "lun": "lunda", + "luo": "luo (Kenia i Tanzania)", + "lus": "lushai", + "mad": "madurajski", + "mag": "magahi", + "mah": "marshalski", + "mai": "maithili", + "mak": "makasar", + "mal": "malajalam", + "man": "mandingo", + "mar": "marathi", + "mas": "masajski", + "mdf": "moksza", + "mdr": "mandar", + "men": "mende (Sierra Leone)", + "mga": "irlandzki średniowieczny (900-1200)", + "mic": "micmac", + "min": "minangkabau", + "mis": "języki niezakodowane", + "mkd": "macedoński", + "mlg": "malgaski", + "mlt": "maltański", + "mnc": "mandżurski", + "mni": "manipuri", + "moh": "mohawk", + "mon": "mongolski", + "mos": "mossi", + "mri": "maoryski", + "msa": "malajski (makrojęzyk)", + "mul": "wiele języków", + "mus": "krik", + "mwl": "mirandyjski", + "mwr": "marwari", + "mya": "birmański", + "myv": "erzja", + "nap": "neapolitański", + "nau": "nauruański", + "nav": "navaho", + "nbl": "ndebele południowy", + "nde": "ndebele północny", + "ndo": "ndonga", + "nds": "German; Low", + "nep": "nepalski", + "new": "newarski", + "nia": "nias", + "niu": "niue", + "nld": "holenderski", + "nno": "norweski Nynorsk", + "nob": "norweski Bokmål", + "nog": "nogajski", + "non": "staronordyjski", + "nor": "norweski", + "nqo": "n’ko", + "nso": "sotho północny", + "nwc": "newarski klasyczny", + "nya": "njandża", + "nym": "nyamwezi", + "nyn": "nyankole", + "nyo": "nyoro", + "nzi": "nzema", + "oci": "okcytański (po 1500)", + "oji": "odżibwe", + "ori": "orija", + "orm": "oromo", + "osa": "osage", + "oss": "osetyjski", + "ota": "turecki otomański (1500-1928)", + "pag": "pangasino", + "pal": "pahlawi", + "pam": "pampango", + "pan": "pendżabski", + "pap": "papiamento", + "pau": "palau", + "peo": "staroperski (ok. 
600-400 p.n.e)", + "phn": "fenicki", + "pli": "pali", + "pol": "Polski", + "pon": "pohnpei", + "por": "portugalski", + "pro": "prowansalski średniowieczny (do 1500)", + "pus": "paszto", + "que": "keczua", + "raj": "radźasthani", + "rap": "rapanui", + "rar": "maoryski Wysp Cooka", + "roh": "retoromański", + "rom": "romski", + "ron": "rumuński", + "run": "rundi", + "rup": "arumuński", + "rus": "rosyjski", + "sad": "sandawe", + "sag": "sango", + "sah": "jakucki", + "sam": "samarytański aramejski", + "san": "sanskryt", + "sas": "sasak", + "sat": "santali", + "scn": "sycylijski", + "sco": "scots", + "sel": "selkupski", + "sga": "staroirlandzki (do 900)", + "shn": "szan", + "sid": "sidamo", + "sin": "syngaleski", + "slk": "słowacki", + "slv": "słoweński", + "sma": "południowolapoński", + "sme": "północnolapoński", + "smj": "lapoński lule", + "smn": "lapoński inari", + "smo": "samoański", + "sms": "lapoński skolt", + "sna": "shona", + "snd": "sindhi", + "snk": "soninke", + "sog": "sogdiański", + "som": "somalijski", + "sot": "sotho południowy", + "spa": "hiszpański", + "sqi": "albański", + "srd": "sardyński", + "srn": "sranan tongo", + "srp": "serbski", + "srr": "serer", + "ssw": "suazi", + "suk": "sukuma", + "sun": "sundajski", + "sus": "susu", + "sux": "sumeryjski", + "swa": "suahili (makrojęzyk)", + "swe": "szwedzki", + "syc": "syryjski klasyczny", + "syr": "syryjski", + "tah": "tahitański", + "tam": "tamilski", + "tat": "tatarski", + "tel": "telugu", + "tem": "temne", + "ter": "tereno", + "tet": "tetum", + "tgk": "tadżycki", + "tgl": "tagalski", + "tha": "tajski", + "tig": "tigre", + "tir": "tigrinia", + "tiv": "tiw", + "tkl": "tokelau", + "tlh": "klingoński", + "tli": "tlingit", + "tmh": "tuareski", + "tog": "tongański (Nyasa)", + "ton": "tongański (Wyspy Tonga)", + "tpi": "tok pisin", + "tsi": "tsimszian", + "tsn": "tswana", + "tso": "tsonga", + "tuk": "turkmeński", + "tum": "tumbuka", + "tur": "turecki", + "tvl": "tuvalu", + "twi": "twi", + "tyv": "tuwiński", + 
"udm": "udmurcki", + "uga": "ugarycki", + "uig": "ujgurski", + "ukr": "ukraiński", + "umb": "umbundu", + "und": "nieokreślony", + "urd": "urdu", + "uzb": "uzbecki", + "vai": "wai", + "ven": "venda", + "vie": "wietnamski", + "vol": "wolapik", + "vot": "wotycki", + "wal": "walamo", + "war": "warajski (Filipiny)", + "was": "washo", + "wln": "waloński", + "wol": "wolof", + "xal": "kałmucki", + "xho": "xhosa", + "yao": "yao", + "yap": "japski", + "yid": "jidysz", + "yor": "joruba", + "zap": "zapotecki", + "zbl": "bliss", + "zen": "zenaga", + "zha": "zhuang", + "zho": "chiński", + "zul": "zuluski", + "zun": "zuni", + "zxx": "brak kontekstu językowego", + "zza": "zazaki" + }, + "ru": { + "aar": "Афар", + "abk": "Абхазский", + "ace": "Ачехский", + "ach": "Ачоли", + "ada": "Адангме", + "ady": "Адыгейский", + "afh": "Африхили", + "afr": "Африкаанс", + "ain": "Ainu (Japan)", + "aka": "Акан", + "akk": "Аккадский", + "ale": "Алеутский", + "alt": "Altai; Southern", + "amh": "Амхарский (Амаринья)", + "ang": "English; Old (ca. 
450-1100)", + "anp": "Анжика", + "ara": "Арабский", + "arc": "Арамейский; Официальный", + "arg": "Арагонский", + "arn": "Mapudungun", + "arp": "Арапахо", + "arw": "Аравакский", + "asm": "Ассамский", + "ast": "Астурийский", + "ava": "Аварский", + "ave": "Авестийский", + "awa": "Авадхи", + "aym": "Аймара", + "aze": "Азербайджанский", + "bak": "Башкирский", + "bal": "Baluchi", + "bam": "Бамбара", + "ban": "Балийский", + "bas": "Баса (Камерун)", + "bej": "Беджа", + "bel": "Белорусский", + "bem": "Бемба (Замбия)", + "ben": "Бенгальский", + "bho": "Бходжпури", + "bik": "Бикольский", + "bin": "Бини", + "bis": "Бислама", + "bla": "Сиксика", + "bod": "Тибетский", + "bos": "Боснийский", + "bra": "Браун", + "bre": "Бретонский", + "bua": "Бурятский", + "bug": "Бугийский", + "bul": "Болгарский", + "byn": "Bilin", + "cad": "Каддо", + "car": "Carib; Galibi", + "cat": "Каталанский", + "ceb": "Себуано", + "ces": "Чешский", + "cha": "Чаморро", + "chb": "Чибча", + "che": "Чеченский", + "chg": "Чагатайский", + "chk": "Трукский", + "chm": "Марийский (Россия)", + "chn": "Чинук жаргон", + "cho": "Чоктав", + "chp": "Чипевианский", + "chr": "Чероки", + "chu": "Slavonic; Old", + "chv": "Чувашский", + "chy": "Чейенн", + "cop": "Коптский", + "cor": "Корнский", + "cos": "Корсиканский", + "cre": "Кри", + "crh": "Turkish; Crimean", + "csb": "Кашубианский", + "cym": "Уэльский (Валлийский)", + "dak": "Дакота", + "dan": "Датский", + "dar": "Даргва", + "del": "Делаварский", + "den": "Атапачские языки", + "deu": "Немецкий", + "dgr": "Догриб", + "din": "Динка", + "div": "Dhivehi", + "doi": "Dogri (macrolanguage)", + "dsb": "Sorbian; Lower", + "dua": "Дуала", + "dum": "Dutch; Middle (ca. 
1050-1350)", + "dyu": "Диула (Дьюла)", + "dzo": "Дзонг-кэ", + "efi": "Эфик", + "egy": "Древнеегипетский", + "eka": "Экаджук", + "ell": "Новогреческий (с 1453)", + "elx": "Эламский", + "eng": "Английский", + "enm": "Среднеанглийский (1100-1500)", + "epo": "Эсперанто", + "est": "Эстонский", + "eus": "Баскский", + "ewe": "Эве", + "ewo": "Эвондо", + "fan": "Fang (Equatorial Guinea)", + "fao": "Фарерский", + "fas": "Персидский", + "fat": "Фанти", + "fij": "Фиджийский", + "fil": "Filipino", + "fin": "Финский", + "fon": "Фон", + "fra": "Французский", + "frm": "French; Middle (ca. 1400-1600)", + "fro": "French; Old (842-ca. 1400)", + "frr": "Frisian; Northern", + "frs": "Frisian; Eastern", + "fry": "Frisian; Western", + "ful": "Фулах", + "fur": "Фриулианский", + "gaa": "Га", + "gay": "Гайо", + "gba": "Gbaya (Central African Republic)", + "gez": "Геэз", + "gil": "Гильбертский", + "gla": "Gaelic; Scottish", + "gle": "Ирландский", + "glg": "Galician", + "glv": "Мэнкский", + "gmh": "German; Middle High (ca. 1050-1500)", + "goh": "German; Old High (ca. 
750-1050)", + "gon": "Гонди", + "gor": "Горонтало", + "got": "Готский", + "grb": "Гребо", + "grc": "Древнегреческий (по 1453)", + "grn": "Гуарани", + "gsw": "German; Swiss", + "guj": "Гуджарати", + "gwi": "Gwichʼin", + "hai": "Хайда", + "hat": "Creole; Haitian", + "hau": "Хауса", + "haw": "Гавайский", + "heb": "Иврит", + "her": "Гереро", + "hil": "Хилигайнон", + "hin": "Хинди", + "hit": "Хиттит", + "hmn": "Хмонг", + "hmo": "Хири Моту", + "hrv": "Хорватский", + "hsb": "Sorbian; Upper", + "hun": "Венгерский", + "hup": "Хупа", + "hye": "Армянский", + "iba": "Ибанский", + "ibo": "Игбо", + "ido": "Идо", + "iii": "Yi; Sichuan", + "iku": "Инуктитут", + "ile": "Интерлингве", + "ilo": "Илоко", + "ina": "Интерлингва (Ассоциация международного вспомогательного языка)", + "ind": "Индонезийский", + "inh": "Ингушский", + "ipk": "Инулиак", + "isl": "Исландский", + "ita": "Итальянский", + "jav": "Яванский", + "jbo": "Лоджбан", + "jpn": "Японский", + "jpr": "Еврейско-персидский", + "jrb": "Еврейско-арабский", + "kaa": "Каракалпакский", + "kab": "Кабильский", + "kac": "Качинский", + "kal": "Kalaallisut", + "kam": "Kamba (Kenya)", + "kan": "Каннада", + "kas": "Кашмири", + "kat": "Грузинский", + "kau": "Канури", + "kaw": "Кави", + "kaz": "Казахский", + "kbd": "Кабардинский", + "kha": "Кхаси", + "khm": "Khmer; Central", + "kho": "Хотанский", + "kik": "Кикуйю", + "kin": "Киньяруанда", + "kir": "Киргизский", + "kmb": "Кимбунду", + "kok": "Konkani (macrolanguage)", + "kom": "Коми", + "kon": "Конго", + "kor": "Корейский", + "kos": "Косраинский", + "kpe": "Кпелле", + "krc": "Карачаево-балкарский", + "krl": "Карельский", + "kru": "Курух", + "kua": "Киньяма", + "kum": "Кумыкский", + "kur": "Курдский", + "kut": "Кутенаи", + "lad": "Ладино", + "lah": "Лахнда", + "lam": "Ламба", + "lao": "Лаосский", + "lat": "Латинский", + "lav": "Латвийский", + "lez": "Лезгинский", + "lim": "Limburgan", + "lin": "Лингала", + "lit": "Литовский", + "lol": "Монго", + "loz": "Лози", + "ltz": "Luxembourgish", + 
"lua": "Луба-Лулуа", + "lub": "Луба-Катанга", + "lug": "Ганда", + "lui": "Луисеньо", + "lun": "Лунда", + "luo": "Луо (Кения и Танзания)", + "lus": "Лушай", + "mad": "Мадурский", + "mag": "Магахи", + "mah": "Marshallese", + "mai": "Майтхили", + "mak": "Макассарский", + "mal": "Малаялам", + "man": "Мандинго", + "mar": "Маратхи", + "mas": "Масаи", + "mdf": "Мокшанский", + "mdr": "Мандарский", + "men": "Mende (Sierra Leone)", + "mga": "Среднеирландский (900-1200)", + "mic": "Mi'kmaq", + "min": "Минангкабау", + "mis": "Uncoded languages", + "mkd": "Македонский", + "mlg": "Малагаси", + "mlt": "Мальтийский", + "mnc": "Манчу", + "mni": "Манипури", + "moh": "Мохаук", + "mon": "Монгольский", + "mos": "Моей", + "mri": "Маори", + "msa": "Malay (macrolanguage)", + "mul": "Разных семей языки", + "mus": "Крик", + "mwl": "Мирандские", + "mwr": "Марвари", + "mya": "Бирманский", + "myv": "Эрзянский", + "nap": "Неаполитанский", + "nau": "Науру", + "nav": "Navajo", + "nbl": "Ндебеле южный", + "nde": "Ндебеле северный", + "ndo": "Ндунга", + "nds": "German; Low", + "nep": "Непальский", + "new": "Bhasa; Nepal", + "nia": "Ниас", + "niu": "Ниуэ", + "nld": "Нидерландский", + "nno": "Норвежский Нюнорск", + "nob": "Norwegian Bokmål", + "nog": "Ногайский", + "non": "Старонорвежский", + "nor": "Норвежский", + "nqo": "Н'ко", + "nso": "Sotho; Northern", + "nwc": "Newari; Old", + "nya": "Nyanja", + "nym": "Ньямвези", + "nyn": "Ньянколе", + "nyo": "Ньоро", + "nzi": "Нзима", + "oci": "Occitan (post 1500)", + "oji": "Оджибва", + "ori": "Ория", + "orm": "Оромо", + "osa": "Оседжи", + "oss": "Ossetian", + "ota": "Турецкий; Отомангский (1500-1928)", + "pag": "Пангасинан", + "pal": "Пехлевийский", + "pam": "Пампанга", + "pan": "Panjabi", + "pap": "Папьяменто", + "pau": "Палау", + "peo": "Persian; Old (ca. 
600-400 B.C.)", + "phn": "Финикийский", + "pli": "Пали", + "pol": "Польский", + "pon": "Фонпейский", + "por": "Португальский", + "pro": "Старопровансальский (по 1500)", + "pus": "Пушту", + "que": "Кечуа", + "raj": "Раджастхани", + "rap": "Рапаню", + "rar": "Maori; Cook Islands", + "roh": "Romansh", + "rom": "Цыганский", + "ron": "Румынский", + "run": "Рунди", + "rup": "Romanian; Macedo-", + "rus": "Русский", + "sad": "Сандаве", + "sag": "Санго", + "sah": "Якутский", + "sam": "Aramaic; Samaritan", + "san": "Санскрит", + "sas": "Сасакский", + "sat": "Сантали", + "scn": "Сицилийский", + "sco": "Шотландский", + "sel": "Селкапский", + "sga": "Староирландский (по 900)", + "shn": "Шанский", + "sid": "Сидама", + "sin": "Сингальский", + "slk": "Словацкий", + "slv": "Словенский", + "sma": "Sami; Southern", + "sme": "Sami; Northern", + "smj": "Люле-саамский", + "smn": "Sami; Inari", + "smo": "Самоанский", + "sms": "Sami; Skolt", + "sna": "Шона", + "snd": "Синдхи", + "snk": "Сонинк", + "sog": "Согдийский", + "som": "Сомали", + "sot": "Сото Южный", + "spa": "Испанский", + "sqi": "Албанский", + "srd": "Сардинский", + "srn": "Sranan Tongo", + "srp": "Сербский", + "srr": "Серер", + "ssw": "Свати", + "suk": "Сукума", + "sun": "Сунданский", + "sus": "Сусу", + "sux": "Шумерский", + "swa": "Swahili (macrolanguage)", + "swe": "Шведский", + "syc": "Syriac; Classical", + "syr": "Сирийский", + "tah": "Таитянский", + "tam": "Тамильский", + "tat": "Татарский", + "tel": "Телугу", + "tem": "Темне", + "ter": "Терено", + "tet": "Тетумский", + "tgk": "Таджикский", + "tgl": "Тагалог", + "tha": "Таи", + "tig": "Тигре", + "tir": "Тигринья", + "tiv": "Тив", + "tkl": "Токелау", + "tlh": "Klingon", + "tli": "Тлингит", + "tmh": "Тамашек", + "tog": "Тонга (Ньяса)", + "ton": "Тонга (острова Тонга)", + "tpi": "Ток Писин", + "tsi": "Цимшиан", + "tsn": "Тсвана", + "tso": "Тсонга", + "tuk": "Туркменский", + "tum": "Тумбука", + "tur": "Турецкий", + "tvl": "Тувалу", + "twi": "Тви", + "tyv": "Тувинский", + 
"udm": "Удмуртский", + "uga": "Угаритский", + "uig": "Уйгурский", + "ukr": "Украинский", + "umb": "Умбунду", + "und": "Неидентифицированный", + "urd": "Урду", + "uzb": "Узбекский", + "vai": "Ваи", + "ven": "Венда", + "vie": "Вьетнамский", + "vol": "Волапюк", + "vot": "Вотик", + "wal": "Wolaytta", + "war": "Waray (Philippines)", + "was": "Вашо", + "wln": "Валлун", + "wol": "Волоф", + "xal": "Kalmyk", + "xho": "Коса", + "yao": "Яо", + "yap": "Яапийский", + "yid": "Идиш", + "yor": "Йоруба", + "zap": "Сапотекский", + "zbl": "Blissymbols", + "zen": "Зенагский", + "zha": "Чжуанский", + "zho": "Китайский", + "zul": "Зулусский", + "zun": "Зуньи", + "zxx": "Нет языкового содержимого", + "zza": "Зазаки" + }, "sv": { "aar": "Afar", "abk": "Abchaziska", @@ -5847,845 +5847,1218 @@ LANGUAGE_NAMES = { "zxx": "No linguistic content", "zza": "Zaza" }, - "cs": { - "aar": "afarština", - "abk": "abchazajština", - "ace": "atěžština", - "ach": "ačoli (luoština)", - "ada": "adangmeština", - "ady": "adyghe", - "afh": "afrihilijština", - "afr": "afrikánština", - "ain": "ainu (Japonsko)", - "aka": "akanština", - "akk": "akkadština", - "ale": "aleutština", - "alt": "altajština; jižní", - "amh": "Amharština", - "ang": "Angličtina; stará (asi 450-1100)", - "anp": "angika", - "ara": "arabština", - "arc": "aramejština; oficiální (700-300 př. n. 
l.)", - "arg": "aragonská španělština", - "arn": "mapudungun", - "arp": "arapaho", - "arw": "arawacké jazyky", - "asm": "ásámština", - "ast": "asturština", - "ava": "avarština", - "ave": "avestština", - "awa": "avadhština (avadhí)", - "aym": "aymarština", - "aze": "azerbajdžánština", - "bak": "Baskirština", - "bal": "balúčština", - "bam": "bambarština", - "ban": "balijština", - "bas": "basa (Kamerun)", - "bej": "bedža", - "bel": "běloruština", - "bem": "bemba (Zambie)", - "ben": "bengálština", - "bho": "bhódžpurština", - "bik": "bikolština", - "bin": "bini", - "bis": "bislamština", - "bla": "siksika", - "bod": "tibetština", - "bos": "bosenština", - "bra": "bradžština", - "bre": "bretonština", - "bua": "burjatština", - "bug": "bugiština", - "bul": "bulharština", - "byn": "bilin", - "cad": "caddo", - "car": "carib; Galibi", - "cat": "katalánština", - "ceb": "cebuánština", - "ces": "Čeština", - "cha": "čamoro", - "chb": "čibča", - "che": "čečenština", - "chg": "Chagatai", - "chk": "čukčtina", - "chm": "Mari (Russia)", - "chn": "činuk pidžin", - "cho": "choctawština", - "chp": "čipeva", - "chr": "čerokézština", - "chu": "Slavonic; Old", - "chv": "čuvaština", - "chy": "čejenština", - "cop": "koptština", - "cor": "kornština", - "cos": "korsičtina", - "cre": "krí", - "crh": "Turkish; Crimean", - "csb": "kašubština", - "cym": "velština", - "dak": "dakotština", - "dan": "dánština", - "dar": "dargwa", - "del": "delawarština", - "den": "atabaské jazyky", - "deu": "Němčina", - "dgr": "dogrib", - "din": "dinkština", - "div": "Dhivehi", - "doi": "Dogri (macrolanguage)", - "dsb": "Sorbian; Lower", - "dua": "dualština", - "dum": "Dutch; Middle (ca. 
1050-1350)", - "dyu": "djula", - "dzo": "Bhútánština", - "efi": "efik", - "egy": "egyptština (starověká)", - "eka": "ekajuk", - "ell": "řečtina; moderní (1453-)", - "elx": "elamština", - "eng": "Angličtina", - "enm": "Angličtina; středověká (1100-1500)", - "epo": "esperanto", - "est": "estonština", - "eus": "baskičtina", - "ewe": "eweština", - "ewo": "ewondo", - "fan": "Fang (Equatorial Guinea)", - "fao": "faerština", - "fas": "perština", - "fat": "fantiština", - "fij": "Fidži", - "fil": "Filipino", - "fin": "finština", - "fon": "fonština", - "fra": "francouzština", - "frm": "French; Middle (ca. 1400-1600)", - "fro": "French; Old (842-ca. 1400)", - "frr": "Frisian; Northern", - "frs": "Frisian; Eastern", - "fry": "Frisian; Western", - "ful": "fulahština", - "fur": "furlanština", - "gaa": "ga", - "gay": "gayo", - "gba": "Gbaya (Central African Republic)", - "gez": "etiopština", - "gil": "kiribatština", - "gla": "Gaelic; Scottish", - "gle": "irština", - "glg": "Galician", - "glv": "manština", - "gmh": "German; Middle High (ca. 1050-1500)", - "goh": "German; Old High (ca. 
750-1050)", - "gon": "góndština", - "gor": "gorontalo", - "got": "gótština", - "grb": "grebo", - "grc": "řečtina; starověká (do 1453)", - "grn": "Guaranština", - "gsw": "German; Swiss", - "guj": "Gudžarátština", - "gwi": "Gwichʼin", - "hai": "haida", - "hat": "Creole; Haitian", - "hau": "Hausa", - "haw": "havajština", - "heb": "hebrejština", - "her": "herero", - "hil": "hiligayonština", - "hin": "hindština", - "hit": "chetitština", - "hmn": "hmongština", - "hmo": "hiri motu", - "hrv": "chorvatština", - "hsb": "Sorbian; Upper", - "hun": "maďarština", - "hup": "hupa", - "hye": "arménština", - "iba": "iban", - "ibo": "igbo", - "ido": "ido", - "iii": "Yi; Sichuan", - "iku": "Inuktitutština", - "ile": "Interlingue", - "ilo": "ilokánština", - "ina": "Interlingua (Mezinárodní pomocná jazyková asociace)", - "ind": "indonézština", - "inh": "inguština", - "ipk": "Inupiakština", - "isl": "islandština", - "ita": "italština", - "jav": "jávština", - "jbo": "lojban", - "jpn": "japonština", - "jpr": "judeo-perština", - "jrb": "judeo-arabština", - "kaa": "karakalpačtina", - "kab": "kabulí", - "kac": "kačjinština", - "kal": "Kalaallisut", - "kam": "Kamba (Kenya)", - "kan": "Kannadština", - "kas": "kašmírština", - "kat": "Gruzínština", - "kau": "kanuri", - "kaw": "kawi", - "kaz": "Kazachština", - "kbd": "kabardština", - "kha": "Khasi", - "khm": "Khmer; Central", - "kho": "chotánština", - "kik": "Kikuyu", - "kin": "Kinyarwandština", - "kir": "Kirgizština", - "kmb": "kimbundština", - "kok": "Konkani (macrolanguage)", - "kom": "komijština", - "kon": "Kongo", - "kor": "korejština", - "kos": "kosrajština", - "kpe": "kpelle", - "krc": "karachay-balkarština", - "krl": "karelština", - "kru": "kurukh", - "kua": "Kuanyama", - "kum": "kumyčtina", - "kur": "kurdština", - "kut": "kutenai", - "lad": "ladino", - "lah": "lahnda", - "lam": "lambština", - "lao": "Laoština", - "lat": "latina", - "lav": "Latvian", - "lez": "lezgiština", - "lim": "Limburgan", - "lin": "Ngalština", - "lit": "litevština", 
- "lol": "mongština", - "loz": "lozština", - "ltz": "Luxembourgish", - "lua": "luba-luluaština", - "lub": "lubu-katanžština", - "lug": "Ganda", - "lui": "luiseňo", - "lun": "lundština", - "luo": "luoština (Keňa a Tanzanie)", - "lus": "lušáí", - "mad": "madurština", - "mag": "magahština", - "mah": "Marshallese", - "mai": "maithilština", - "mak": "makasarština", - "mal": "Malabarština", - "man": "mandingština", - "mar": "maráthština", - "mas": "masajština", - "mdf": "moksha", - "mdr": "mandarínština", - "men": "Mende (Sierra Leone)", - "mga": "irština; středověká (900-1200)", - "mic": "Mi'kmaq", - "min": "minangkabau", - "mis": "Uncoded languages", - "mkd": "makedonština", - "mlg": "Malgaština", - "mlt": "maltézština", - "mnc": "manchu", - "mni": "manipurština", - "moh": "mohawk", - "mon": "mongolština", - "mos": "mosi", - "mri": "maorština", - "msa": "Malay (macrolanguage)", - "mul": "násobné jazyky", - "mus": "krík", - "mwl": "mirandština", - "mwr": "márvárština", - "mya": "Barmština", - "myv": "erzya", - "nap": "neapolština", - "nau": "naurština", - "nav": "navažština", - "nbl": "Ndebele; South", - "nde": "Ndebele; North", - "ndo": "ndondština", - "nds": "German; Low", - "nep": "nepálština", - "new": "Bhasa; Nepal", - "nia": "nias", - "niu": "niue", - "nld": "holandština", - "nno": "Norwegian Nynorsk", - "nob": "Norwegian Bokmål", - "nog": "nogai", - "non": "norština; stará", - "nor": "norština", - "nqo": "N'Ko", - "nso": "Sotho; Northern", - "nwc": "Newari; Old", - "nya": "Nyanja", - "nym": "ňamwežština", - "nyn": "nyankolština", - "nyo": "Nyoro", - "nzi": "nzima", - "oci": "Occitan (post 1500)", - "oji": "Ojibwa", - "ori": "orija", - "orm": "Oromo (Afan)", - "osa": "osagština", - "oss": "Ossetian", - "ota": "turečtina; osmanská (1500-1928)", - "pag": "pangsinan", - "pal": "pahlaví", - "pam": "pampangau", - "pan": "Panjabi", - "pap": "papiamento", - "pau": "palauština", - "peo": "Persian; Old (ca. 
600-400 B.C.)", - "phn": "Slovinština", - "pli": "páli", - "pol": "Polština", - "pon": "pohnpeiština", - "por": "portugalština", - "pro": "provensálština; stará (do 1500)", - "pus": "pašto", - "que": "kečuánština", - "raj": "rádžasthánština", - "rap": "rapanuiština", - "rar": "Maori; Cook Islands", - "roh": "Romansh", - "rom": "římština", - "ron": "rumunština", - "run": "Kirundi", - "rup": "Romanian; Macedo-", - "rus": "Ruština", - "sad": "sandawština", - "sag": "sangoština", - "sah": "jakutština", - "sam": "Aramaic; Samaritan", - "san": "sanskrt", - "sas": "sačtina", - "sat": "santálí", - "scn": "sicilština", - "sco": "skotština", - "sel": "selkupština", - "sga": "irština; stará (do 900)", - "shn": "šanština", - "sid": "sidamo", - "sin": "Sinhálština", - "slk": "Slovenština", - "slv": "slovinština", - "sma": "Sami; Southern", - "sme": "Sami; Northern", - "smj": "lule sami", - "smn": "Sami; Inari", - "smo": "Samoyština", - "sms": "Sami; Skolt", - "sna": "šonština", - "snd": "sindhština", - "snk": "sonikština", - "sog": "sogdijština", - "som": "somálština", - "sot": "sotština; jižní", - "spa": "španělština", - "sqi": "albánština", - "srd": "sardinština", - "srn": "Sranan Tongo", - "srp": "srbština", - "srr": "Serer", - "ssw": "Siswatština", - "suk": "sukuma", - "sun": "Sundanština", - "sus": "susu", - "sux": "sumerština", - "swa": "svahilština (makrojazyk)", - "swe": "švédština", - "syc": "Syriac; Classical", - "syr": "syrština", - "tah": "tahitština", - "tam": "Tamilština", - "tat": "tatarština", - "tel": "Telugu", - "tem": "temne", - "ter": "tereno", - "tet": "tetumština", - "tgk": "Tádžičtina", - "tgl": "Tagalog", - "tha": "thajština", - "tig": "tigrejština", - "tir": "Tigrinijština", - "tiv": "tivština", - "tkl": "tokelauština", - "tlh": "Klingon", - "tli": "tlingit", - "tmh": "Tamashek", - "tog": "tongština (nyasa)", - "ton": "Tonga", - "tpi": "tok pisin", - "tsi": "tsimshijské jazyky", - "tsn": "Setswanština", - "tso": "Tsonga", - "tuk": "turkmenistánština", - 
"tum": "tumbukština", - "tur": "turečtina", - "tvl": "tuvalština", - "twi": "ťwiština", - "tyv": "tuvština", - "udm": "udmurtština", - "uga": "ugaritština", - "uig": "Uighurština", - "ukr": "ukrajinština", - "umb": "umbundu", - "und": "neurčitý", - "urd": "urdština", - "uzb": "uzbekistánština", - "vai": "vai", - "ven": "vendština", - "vie": "vietnamština", - "vol": "volapük", - "vot": "votiatština", - "wal": "Wolaytta", - "war": "Waray (Philippines)", - "was": "washo", - "wln": "valonština", - "wol": "volofština", - "xal": "Kalmyk", - "xho": "xhoština", - "yao": "jaoština", - "yap": "japština", - "yid": "Jidiš", - "yor": "jorubština", - "zap": "Zapotec", - "zbl": "Blissymbols", - "zen": "zenaga", - "zha": "Zhuang", - "zho": "čínština", - "zul": "Zulu", - "zun": "zunijština", - "zxx": "bez lingvistického obsahu", - "zza": "zaza" - }, - "es": { - "aar": "Afar", - "abk": "Abkhazian", + "tr": { + "abk": "Abhazca", "ace": "Achinese", "ach": "Acoli", "ada": "Adangme", "ady": "Adyghe", + "aar": "Afar", "afh": "Afrihili", - "afr": "Afrikaans", - "ain": "Ainu (Japan)", - "aka": "Akan", - "akk": "Akkadian", - "ale": "Aleut", - "alt": "Altai; Southern", - "amh": "Amharic", - "ang": "English; Old (ca. 
450-1100)", + "afr": "Afrikanca", + "ain": "Ainu (Japonca)", + "aka": "Akanca (Afrika dili)", + "akk": "Akatça", + "sqi": "Albanian", + "ale": "Alaskaca", + "amh": "Etiyopyaca", "anp": "Angika", - "ara": "Arabic", - "arc": "Aramaic; Official (700-300 BCE)", - "arg": "Aragonese", - "arn": "Mapudungun", - "arp": "Arapaho", - "arw": "Arawak", - "asm": "Assamese", - "ast": "Asturian", - "ava": "Avaric", - "ave": "Avestan", - "awa": "Awadhi", - "aym": "Aymara", - "aze": "Azerbaijani", - "bak": "Bashkir", - "bal": "Baluchi", - "bam": "Bambara", - "ban": "Balinese", - "bas": "Basa (Cameroon)", - "bej": "Beja", - "bel": "Belarusian", + "ara": "Arapça", + "arg": "Aragonca (İspanya)", + "arp": "Arapaho (Kuzey Amerika yerlileri)", + "arw": "Arawak (Surinam)", + "hye": "Ermenice", + "asm": "Assamese (Hindistan)", + "ast": "Asturyasca", + "ava": "Avarca", + "ave": "Avestan (Eski İran)", + "awa": "Awadhi (Hindistan)", + "aym": "Aymara (Güney Amerika)", + "aze": "Azerice", + "ban": "Balice (Bali adaları)", + "bal": "Belucice (İran)", + "bam": "Bambara (Mali)", + "bas": "Basa (Kamerun)", + "bak": "Başkırca", + "eus": "Baskça", + "bej": "Beja (Eritre; Sudan)", + "bel": "Beyaz Rusça", "bem": "Bemba (Zambia)", - "ben": "Bengali", - "bho": "Bhojpuri", - "bik": "Bikol", - "bin": "Bini", - "bis": "Bislama", - "bla": "Siksika", - "bod": "Tibetan", - "bos": "Bosnian", - "bra": "Braj", - "bre": "Breton", - "bua": "Buriat", - "bug": "Buginese", - "bul": "Bulgarian", + "ben": "Bengalce", + "bho": "Bhojpuri (Hindistan)", + "bik": "Bikol (Filipinler)", "byn": "Bilin", - "cad": "Caddo", - "car": "Carib; Galibi", - "cat": "Catalan", - "ceb": "Cebuano", - "ces": "Czech", - "cha": "Chamorro", - "chb": "Chibcha", - "che": "Chechen", - "chg": "Chagatai", + "bin": "Bini (Afrika)", + "bis": "Bislama (Vanuatu; Kuzey Pasifik)", + "zbl": "Blis Sembolleri", + "bos": "Boşnakça", + "bra": "Braj (Hindistan)", + "bre": "Bretonca", + "bug": "Buginese (Endonezya)", + "bul": "Bulgarca", + "bua": "Buriat 
(Moğolistan)", + "mya": "Burmaca", + "cad": "Caddo (Kuzey Amerika yerlileri)", + "cat": "Katalanca", + "ceb": "Cebuano (Filipinler)", + "chg": "Çağatayca", + "cha": "Chamorro (Guam adaları)", + "che": "Çeçence", + "chr": "Cherokee (Kuzey Amerika yerlileri)", + "chy": "Cheyenne (kuzey Amerika yerlileri)", + "chb": "Chibcha (Kolombiya)", + "zho": "Çince", + "chn": "Chinook lehçesi (Kuzey Batı Amerika kıyıları)", + "chp": "Chipewyan (Kuzey Amerika yerlileri)", + "cho": "Choctaw (Kuzey Amerika yerlileri)", "chk": "Chuukese", - "chm": "Mari (Russia)", - "chn": "Chinook jargon", - "cho": "Choctaw", - "chp": "Chipewyan", - "chr": "Cherokee", - "chu": "Slavonic; Old", - "chv": "Chuvash", - "chy": "Cheyenne", - "cop": "Coptic", - "cor": "Cornish", - "cos": "Corsican", - "cre": "Cree", - "crh": "Turkish; Crimean", - "csb": "Kashubian", - "cym": "Welsh", - "dak": "Dakota", - "dan": "Danish", - "dar": "Dargwa", - "del": "Delaware", - "den": "Slave (Athapascan)", - "deu": "German", - "dgr": "Dogrib", - "din": "Dinka", + "chv": "Çuvaş (Türkçe)", + "cop": "Kıptice (Eski Mısır)", + "cor": "Cornish (Kelt)", + "cos": "Korsikaca", + "cre": "Cree (Kuzey Amerika yerlileri)", + "mus": "Creek", + "hrv": "Hırvatça", + "ces": "Çekçe", + "dak": "Dakota (Kuzey Amerika yerlileri)", + "dan": "Danimarkaca; Danca", + "dar": "Dargwa (Dağıstan)", + "del": "Delaware (Kuzey Amerika yerlileri)", "div": "Dhivehi", - "doi": "Dogri (macrolanguage)", - "dsb": "Sorbian; Lower", - "dua": "Duala", - "dum": "Dutch; Middle (ca. 
1050-1350)", - "dyu": "Dyula", - "dzo": "Dzongkha", - "efi": "Efik", - "egy": "Egyptian (Ancient)", - "eka": "Ekajuk", - "ell": "Greek; Modern (1453-)", - "elx": "Elamite", - "eng": "English", - "enm": "English; Middle (1100-1500)", + "din": "Dinka (Sudan)", + "doi": "Dogri (makro dili)", + "dgr": "Dogrib (Kanada)", + "dua": "Duala (Afrika)", + "nld": "Flâmanca (Hollanda dili)", + "dyu": "Dyula (Burkina Faso; Mali)", + "dzo": "Dzongkha (Butan)", + "efi": "Efik (Afrika)", + "egy": "Mısırca (Eski)", + "eka": "Ekajuk (Afrika)", + "elx": "Elamca", + "eng": "İngilizce", + "myv": "Erzya dili", "epo": "Esperanto", - "est": "Estonian", - "eus": "Basque", - "ewe": "Ewe", - "ewo": "Ewondo", - "fan": "Fang (Equatorial Guinea)", - "fao": "Faroese", - "fas": "Persian", - "fat": "Fanti", - "fij": "Fijian", - "fil": "Filipino", - "fin": "Finnish", - "fon": "Fon", - "fra": "French", - "frm": "French; Middle (ca. 1400-1600)", - "fro": "French; Old (842-ca. 1400)", - "frr": "Frisian; Northern", - "frs": "Frisian; Eastern", - "fry": "Frisian; Western", - "ful": "Fulah", - "fur": "Friulian", - "gaa": "Ga", - "gay": "Gayo", - "gba": "Gbaya (Central African Republic)", - "gez": "Geez", - "gil": "Gilbertese", - "gla": "Gaelic; Scottish", - "gle": "Irish", - "glg": "Galician", - "glv": "Manx", - "gmh": "German; Middle High (ca. 1050-1500)", - "goh": "German; Old High (ca. 
750-1050)", - "gon": "Gondi", - "gor": "Gorontalo", - "got": "Gothic", - "grb": "Grebo", - "grc": "Greek; Ancient (to 1453)", - "grn": "Guarani", - "gsw": "German; Swiss", - "guj": "Gujarati", + "est": "Estonca", + "ewe": "Ewe (Afrika)", + "ewo": "Ewondo (Afrika)", + "fan": "Fang (Ekvatoryal Guinea)", + "fat": "Fanti (Afrika)", + "fao": "Faroece", + "fij": "Fiji dili", + "fil": "Filipince", + "fin": "Fince", + "fon": "Fon (Benin)", + "fra": "Fransızca", + "fur": "Friulian (İtalya)", + "ful": "Fulah (Afrika)", + "gaa": "Ganaca", + "glg": "Galce", + "lug": "Ganda Dili", + "gay": "Gayo (Sumatra)", + "gba": "Gbaya (Orta Afrika Cumhuriyeti)", + "gez": "Geez (Etiyopya)", + "kat": "Gürcüce", + "deu": "Almanca", + "gil": "Kiribati dili", + "gon": "Gondi (Hindistan)", + "gor": "Gorontalo (Endonezya)", + "got": "Gotik", + "grb": "Grebo (Liberya)", + "grn": "Guarani (Paraguay)", + "guj": "Gucaratça", "gwi": "Gwichʼin", - "hai": "Haida", - "hat": "Creole; Haitian", - "hau": "Hausa", - "haw": "Hawaiian", - "heb": "Hebrew", - "her": "Herero", + "hai": "Haida (Kuzey Amerika yerlileri)", + "hau": "Hausa Dili", + "haw": "Havai Dili", + "heb": "İbranice", + "her": "Herero Dili", "hil": "Hiligaynon", - "hin": "Hindi", - "hit": "Hittite", - "hmn": "Hmong", + "hin": "Hintçe", "hmo": "Hiri Motu", - "hrv": "Croatian", - "hsb": "Sorbian; Upper", - "hun": "Hungarian", + "hit": "Hititçe", + "hmn": "Hmong", + "hun": "Macarca", "hup": "Hupa", - "hye": "Armenian", "iba": "Iban", - "ibo": "Igbo", - "ido": "Ido", - "iii": "Yi; Sichuan", - "iku": "Inuktitut", - "ile": "Interlingue", + "isl": "İzlandaca", + "ido": "Ido Dili", + "ibo": "Igbo Dili", "ilo": "Iloko", - "ina": "Interlingua (International Auxiliary Language Association)", - "ind": "Indonesian", - "inh": "Ingush", - "ipk": "Inupiaq", - "isl": "Icelandic", - "ita": "Italian", - "jav": "Javanese", - "jbo": "Lojban", - "jpn": "Japanese", - "jpr": "Judeo-Persian", - "jrb": "Judeo-Arabic", - "kaa": "Kara-Kalpak", + "ind": "Endonezyaca", + 
"inh": "İnguşca", + "ina": "Interlingua (Uluslararası Yardımcı Dil Kurumu)", + "ile": "Interlingue", + "iku": "Inuktitut", + "ipk": "Inupiak Dili", + "gle": "İrlandaca", + "ita": "İtalyanca", + "jpn": "Japonca", + "jav": "Cava Dili", + "jrb": "Yahudi-Arapçası", + "jpr": "Yahudi-Farsça", + "kbd": "Kabardian", "kab": "Kabyle", "kac": "Kachin", "kal": "Kalaallisut", + "xal": "Kalmyk", "kam": "Kamba (Kenya)", "kan": "Kannada", - "kas": "Kashmiri", - "kat": "Georgian", - "kau": "Kanuri", - "kaw": "Kawi", - "kaz": "Kazakh", - "kbd": "Kabardian", - "kha": "Khasi", - "khm": "Khmer; Central", - "kho": "Khotanese", - "kik": "Kikuyu", - "kin": "Kinyarwanda", - "kir": "Kirghiz", - "kmb": "Kimbundu", - "kok": "Konkani (macrolanguage)", - "kom": "Komi", - "kon": "Kongo", - "kor": "Korean", - "kos": "Kosraean", - "kpe": "Kpelle", + "kau": "Kanuri Dili", + "kaa": "Kara-Kalpak", "krc": "Karachay-Balkar", "krl": "Karelian", - "kru": "Kurukh", - "kua": "Kuanyama", + "kas": "Keşmirce", + "csb": "Kashubian (Lehçe diyalekti)", + "kaw": "Kawi", + "kaz": "Kazakça", + "kha": "Khasi", + "kho": "Khotanese", + "kik": "Kikuyu Dili", + "kmb": "Kimbundu", + "kin": "Kinyarwanda", + "kir": "Kırgızca", + "tlh": "Klingon", + "kom": "Komi Dili", + "kon": "Kongo Dili", + "kok": "Konkani (makro dil)", + "kor": "Korece", + "kos": "Kosraean", + "kpe": "Kpelle", + "kua": "Kuanyama Dili", "kum": "Kumyk", - "kur": "Kurdish", + "kur": "Kürtçe", + "kru": "Kurukh", "kut": "Kutenai", "lad": "Ladino", "lah": "Lahnda", "lam": "Lamba", - "lao": "Lao", - "lat": "Latin", - "lav": "Latvian", + "lao": "Laos Dili", + "lat": "Latince", + "lav": "Letonca", "lez": "Lezghian", - "lim": "Limburgan", - "lin": "Lingala", - "lit": "Lithuanian", - "lol": "Mongo", + "lim": "Liburg Dili", + "lin": "Lingala Dili", + "lit": "Litvanyaca", + "jbo": "Lojban dili", "loz": "Lozi", - "ltz": "Luxembourgish", + "lub": "Luba Katanga Dili", "lua": "Luba-Lulua", - "lub": "Luba-Katanga", - "lug": "Ganda", "lui": "Luiseno", + "smj": "Lule 
Sami", "lun": "Lunda", - "luo": "Luo (Kenya and Tanzania)", + "luo": "Luo (Kenya ve Tanzanya)", "lus": "Lushai", + "ltz": "Lüksemburg Dili", + "mkd": "Makedonca", "mad": "Madurese", "mag": "Magahi", - "mah": "Marshallese", - "mai": "Maithili", + "mai": "Maithili dili", "mak": "Makasar", + "mlg": "Madagaskar Dili", + "msa": "Malay (makro dili)", "mal": "Malayalam", - "man": "Mandingo", - "mar": "Marathi", - "mas": "Masai", - "mdf": "Moksha", - "mdr": "Mandar", - "men": "Mende (Sierra Leone)", - "mga": "Irish; Middle (900-1200)", - "mic": "Mi'kmaq", - "min": "Minangkabau", - "mis": "Uncoded languages", - "mkd": "Macedonian", - "mlg": "Malagasy", - "mlt": "Maltese", + "mlt": "Maltaca", "mnc": "Manchu", - "mni": "Manipuri", - "moh": "Mohawk", - "mon": "Mongolian", - "mos": "Mossi", - "mri": "Maori", - "msa": "Malay (macrolanguage)", - "mul": "Multiple languages", - "mus": "Creek", - "mwl": "Mirandese", + "mdr": "Mandar", + "man": "Mandingo", + "mni": "Manipuri dili", + "glv": "Manx (Galler)", + "mri": "Maori Dili", + "arn": "Mapudungun", + "mar": "Marathi", + "chm": "Mari (Rusya)", + "mah": "Marshall Dili", "mwr": "Marwari", - "mya": "Burmese", - "myv": "Erzya", - "nap": "Neapolitan", + "mas": "Masai", + "men": "Mende (Sierra Leone)", + "mic": "Mi'kmak", + "min": "Minangkabau", + "mwl": "Mirandese", + "moh": "Mohawk", + "mdf": "Moşka", + "lol": "Mongo", + "mon": "Moğol Dili", + "mos": "Mossi", + "mul": "Çoklu diller", + "nqo": "N'Ko", "nau": "Nauru", - "nav": "Navajo", - "nbl": "Ndebele; South", - "nde": "Ndebele; North", - "ndo": "Ndonga", - "nds": "German; Low", - "nep": "Nepali", - "new": "Bhasa; Nepal", + "nav": "Navajo Dili", + "ndo": "Ndonga Dili", + "nap": "Neapolitan", "nia": "Nias", "niu": "Niuean", - "nld": "Dutch", - "nno": "Norwegian Nynorsk", - "nob": "Norwegian Bokmål", + "zxx": "Hiçbir dil içeriği yok", "nog": "Nogai", - "non": "Norse; Old", - "nor": "Norwegian", - "nqo": "N'Ko", - "nso": "Sotho; Northern", - "nwc": "Newari; Old", - "nya": "Nyanja", + 
"nor": "Norveçce", + "nob": "Norveççe Bokmal", + "nno": "Norveççe Nynorsk", "nym": "Nyamwezi", + "nya": "Nyanja", "nyn": "Nyankole", "nyo": "Nyoro", "nzi": "Nzima", - "oci": "Occitan (post 1500)", - "oji": "Ojibwa", - "ori": "Oriya", - "orm": "Oromo", + "oci": "Oksitanca (1500 sonrası)", + "oji": "Ojibwa Dili", + "orm": "Oromo Dili", "osa": "Osage", - "oss": "Ossetian", - "ota": "Turkish; Ottoman (1500-1928)", - "pag": "Pangasinan", - "pal": "Pahlavi", - "pam": "Pampanga", - "pan": "Panjabi", - "pap": "Papiamento", + "oss": "Osetya Dili", + "pal": "Pehlevi", "pau": "Palauan", - "peo": "Persian; Old (ca. 600-400 B.C.)", - "phn": "Phoenician", - "pli": "Pali", - "pol": "Polish", + "pli": "Pali Dili", + "pam": "Pampanga", + "pag": "Pangasinan", + "pan": "Pencabi Dili", + "pap": "Papiamento", + "fas": "Farsça", + "phn": "Fenikçe", "pon": "Pohnpeian", - "por": "Portuguese", - "pro": "Provençal; Old (to 1500)", - "pus": "Pashto", + "pol": "Polonyaca", + "por": "Portekizce", + "pus": "Pushto", "que": "Quechua", "raj": "Rajasthani", "rap": "Rapanui", - "rar": "Maori; Cook Islands", - "roh": "Romansh", - "rom": "Romany", - "ron": "Romanian", - "run": "Rundi", - "rup": "Romanian; Macedo-", - "rus": "Russian", + "ron": "Rumence", + "roh": "Romanca", + "rom": "Çingene Dili", + "run": "Kirundi", + "rus": "Rusça", + "smo": "Samoa Dili", "sad": "Sandawe", - "sag": "Sango", - "sah": "Yakut", - "sam": "Aramaic; Samaritan", - "san": "Sanskrit", + "sag": "Sangho", + "san": "Sanskritçe", + "sat": "Santali dili", + "srd": "Sardinya", "sas": "Sasak", - "sat": "Santali", - "scn": "Sicilian", - "sco": "Scots", + "sco": "İskoç lehçesi", "sel": "Selkup", - "sga": "Irish; Old (to 900)", - "shn": "Shan", - "sid": "Sidamo", - "sin": "Sinhala", - "slk": "Slovak", - "slv": "Slovenian", - "sma": "Sami; Southern", - "sme": "Sami; Northern", - "smj": "Lule Sami", - "smn": "Sami; Inari", - "smo": "Samoan", - "sms": "Sami; Skolt", - "sna": "Shona", - "snd": "Sindhi", - "snk": "Soninke", - "sog": 
"Sogdian", - "som": "Somali", - "sot": "Sotho; Southern", - "spa": "Spanish", - "sqi": "Albanian", - "srd": "Sardinian", - "srn": "Sranan Tongo", - "srp": "Serbian", + "srp": "Sırpça", "srr": "Serer", - "ssw": "Swati", + "shn": "Shan", + "sna": "Shona", + "scn": "Sicilyaca", + "sid": "Sidamo", + "bla": "Siksika (Kuzey Amerika yerlileri)", + "snd": "Sindhi", + "sin": "Sinhala Dili", + "den": "Slave (Athapascan; Kuzey Amerika yerlileri)", + "slk": "Slovakça", + "slv": "Slovence", + "sog": "Sogdian", + "som": "Somali Dili", + "snk": "Soninke", + "spa": "İspanyolca", + "srn": "Sranan Tongo", "suk": "Sukuma", - "sun": "Sundanese", + "sux": "Sümerce", + "sun": "Sunda Dili", "sus": "Susu", - "sux": "Sumerian", - "swa": "Swahili (macrolanguage)", - "swe": "Swedish", - "syc": "Syriac; Classical", - "syr": "Syriac", - "tah": "Tahitian", - "tam": "Tamil", - "tat": "Tatar", + "swa": "Swahili (makro dil)", + "ssw": "Siswati", + "swe": "İsveççe", + "syr": "Süryanice", + "tgl": "Tagalog", + "tah": "Tahitice", + "tgk": "Tacikçe", + "tmh": "Tamashek", + "tam": "Tamilce", + "tat": "Tatarca", "tel": "Telugu", - "tem": "Timne", "ter": "Tereno", "tet": "Tetum", - "tgk": "Tajik", - "tgl": "Tagalog", - "tha": "Thai", + "tha": "Taylandça", + "bod": "Tibetçe", "tig": "Tigre", - "tir": "Tigrinya", + "tir": "Tigrinya Dili", + "tem": "Timne", "tiv": "Tiv", - "tkl": "Tokelau", - "tlh": "Klingon", "tli": "Tlingit", - "tmh": "Tamashek", - "tog": "Tonga (Nyasa)", - "ton": "Tonga (Tonga Islands)", "tpi": "Tok Pisin", + "tkl": "Tokelau", + "tog": "Tonga (Nyasa)", + "ton": "Tonga", "tsi": "Tsimshian", - "tsn": "Tswana", "tso": "Tsonga", - "tuk": "Turkmen", + "tsn": "Setswana", "tum": "Tumbuka", - "tur": "Turkish", + "tur": "Türkçe", + "tuk": "Türkmence", "tvl": "Tuvalu", - "twi": "Twi", "tyv": "Tuvinian", + "twi": "Twi", "udm": "Udmurt", - "uga": "Ugaritic", - "uig": "Uighur", - "ukr": "Ukrainian", + "uga": "Ugarit Çivi Yazısı", + "uig": "Uygurca", + "ukr": "Ukraynaca", "umb": "Umbundu", - "und": 
"Undetermined", - "urd": "Urdu", - "uzb": "Uzbek", + "mis": "Kodlanmamış diller", + "und": "Belirlenemeyen", + "urd": "Urduca", + "uzb": "Özbekçe", "vai": "Vai", - "ven": "Venda", - "vie": "Vietnamese", + "ven": "Venda Dili", + "vie": "Vietnamca", "vol": "Volapük", "vot": "Votic", + "wln": "Valonca", + "war": "Waray (Filipinler)", + "was": "Vasho", + "cym": "Gal Dili", "wal": "Wolaytta", - "war": "Waray (Philippines)", - "was": "Washo", - "wln": "Walloon", "wol": "Wolof", - "xal": "Kalmyk", "xho": "Xhosa", + "sah": "Yakut", "yao": "Yao", "yap": "Yapese", - "yid": "Yiddish", + "yid": "Yidiş", "yor": "Yoruba", "zap": "Zapotec", - "zbl": "Blissymbols", + "zza": "Zaza", "zen": "Zenaga", - "zha": "Zhuang", - "zho": "Chinese", + "zha": "Zhuang Dili", "zul": "Zulu", - "zun": "Zuni", + "zun": "Zuni" + }, + "uk": { + "aar": "афар", + "abk": "абхазька", + "ace": "ачеська", + "ach": "ачолі", + "ada": "адангме", + "ady": "адигейська", + "afh": "афрингілі", + "afr": "африкаанс", + "ain": "айнська (Японія)", + "aka": "акан", + "akk": "аккадська", + "ale": "алеутська", + "alt": "алтайська (південна)", + "amh": "амхарська", + "ang": "давньоанглійська (бл. 
450-1100)", + "anp": "ангіка", + "ara": "арабська", + "arc": "арамейська (офіційна; 700-300 до нашої ери)", + "arg": "арагонська", + "arn": "арауканська", + "arp": "арапахо", + "arw": "аравакська", + "asm": "ассамська", + "ast": "астурійська", + "ava": "аварська", + "ave": "авестанська", + "awa": "авадхі", + "aym": "аймарська", + "aze": "азербайджанська", + "bak": "башкирська", + "bal": "белуджійська", + "bam": "бамбара", + "ban": "балійська", + "bas": "баса (Камерун)", + "bej": "бежа", + "bel": "білоруська", + "bem": "бемба (Замбія)", + "ben": "бенгальська", + "bho": "бходжпурі", + "bik": "бікольська", + "bin": "біні", + "bis": "біслама", + "bla": "сісіка", + "bod": "тибетська", + "bos": "боснійська", + "bra": "брай", + "bre": "бретонська", + "bua": "бурятська", + "bug": "бугійська", + "bul": "болгарська", + "byn": "білін", + "cad": "каддо", + "car": "карибська (галібі)", + "cat": "каталонська", + "ceb": "себуано", + "ces": "чеська", + "cha": "чаморо", + "chb": "чибча", + "che": "чеченська", + "chg": "чагатайська", + "chk": "чуукська", + "chm": "марійська (Росія)", + "chn": "чинук; жаргон", + "cho": "чоктау", + "chp": "чипев’ян", + "chr": "черокі", + "chu": "давньослов’янська", + "chv": "чуваська", + "chy": "шаєнн", + "cop": "коптська", + "cor": "корнійська", + "cos": "корсиканська", + "cre": "крі", + "crh": "турецька (кримська)", + "csb": "кашубська", + "cym": "валійська", + "dak": "дакота", + "dan": "данська", + "dar": "даргва", + "del": "делаварська", + "den": "слейві (атабаська)", + "deu": "німецька", + "dgr": "догріб", + "din": "дінка", + "div": "мальдивська", + "doi": "догрі (макромова)", + "dsb": "нижньолужицька", + "dua": "дуала", + "dum": "середньовічна голландська (бл. 
1050-1350)", + "dyu": "діула", + "dzo": "дзонг-ке", + "efi": "ефік", + "egy": "давньоєгипетська", + "eka": "екаджук", + "ell": "грецька (з 1453)", + "elx": "еламська", + "eng": "англійська", + "enm": "середньоанглійська (1100-1500)", + "epo": "есперанто", + "est": "естонська", + "eus": "баскська", + "ewe": "еве", + "ewo": "евондо", + "fan": "фанг (Екваторіальна Гвінея)", + "fao": "фарерська", + "fas": "перська", + "fat": "фанті", + "fij": "фіджійська", + "fil": "філіппінська", + "fin": "фінська", + "fon": "фон", + "fra": "французька", + "frm": "середньофранцузька (бл. 1400-1600)", + "fro": "давньофранцузька (842-бл. 1400)", + "frr": "фризька (північна)", + "frs": "фризька (східна)", + "fry": "фризька (західна)", + "ful": "фулах", + "fur": "фріульська", + "gaa": "га", + "gay": "гайо", + "gba": "гбая (Центральноафриканська Республіка)", + "gez": "гііз", + "gil": "гільбертська", + "gla": "гаельська (Шотландія)", + "gle": "ірландська", + "glg": "галісійська", + "glv": "манкс", + "gmh": "середньоверхньонімецька (бл. 1050-1500)", + "goh": "давньосередньонімецька (бл. 
750-1050)", + "gon": "гонді", + "gor": "горонтало", + "got": "готська", + "grb": "гребо", + "grc": "давньогрецька (до 1453)", + "grn": "гуарані", + "gsw": "німецька (Швейцарія)", + "guj": "гуджараті", + "gwi": "гвічин", + "hai": "хайда", + "hat": "креольська (гаїтянська)", + "hau": "хауса", + "haw": "гавайська", + "heb": "іврит", + "her": "гереро", + "hil": "хілігайнон", + "hin": "хінді", + "hit": "хетська", + "hmn": "хмонг", + "hmo": "хірімоту", + "hrv": "хорватська", + "hsb": "верхньолужицька", + "hun": "угорська", + "hup": "хупа", + "hye": "вірменська", + "iba": "ібанська", + "ibo": "ігбо", + "ido": "ідо", + "iii": "ї (Сичуань)", + "iku": "інуктітут", + "ile": "окциденталь", + "ilo": "ілоко", + "ina": "інтерлінгва (Асоціація міжнародної допоміжної мови)", + "ind": "індонезійська", + "inh": "інгушська", + "ipk": "інупіак", + "isl": "ісландська", + "ita": "італійська", + "jav": "яванська", + "jbo": "ложбан", + "jpn": "японська", + "jpr": "єврейсько-перська", + "jrb": "єврейсько-арабська", + "kaa": "каракалпацька", + "kab": "кабильська", + "kac": "качин", + "kal": "калаалісут", + "kam": "камба (Кенія)", + "kan": "каннада", + "kas": "кашмірська", + "kat": "грузинська", + "kau": "канурі", + "kaw": "каві", + "kaz": "казахська", + "kbd": "кабардінська", + "kha": "кхасі", + "khm": "кхмерська (центральна)", + "kho": "хотаносакська", + "kik": "кікуйю", + "kin": "кіньяруанда", + "kir": "киргизька", + "kmb": "кімбунду", + "kok": "конкані (макромова)", + "kom": "комі", + "kon": "конго", + "kor": "корейська", + "kos": "косрейська", + "kpe": "кпелле", + "krc": "карачаєво-балкарська", + "krl": "карельська", + "kru": "курух", + "kua": "куаньяма", + "kum": "кумикська", + "kur": "курдська", + "kut": "кутенай", + "lad": "ладіно", + "lah": "лахнда", + "lam": "ламба", + "lao": "лаоська", + "lat": "латинська", + "lav": "латиська", + "lez": "лезгінська", + "lim": "лімбурганська", + "lin": "лінгала", + "lit": "литовська", + "lol": "монго", + "loz": "лозі", + "ltz": "люксембурзька", + 
"lua": "луба-лулуа", + "lub": "луба-катанга", + "lug": "ганда", + "lui": "луйсеньо", + "lun": "лунда", + "luo": "луо (Кенія і Танзанія)", + "lus": "лушай", + "mad": "мадурська", + "mag": "магахі", + "mah": "маршальська", + "mai": "майтхілі", + "mak": "макасарська", + "mal": "малаялам", + "man": "мандінго", + "mar": "мараті", + "mas": "масаї", + "mdf": "мокшанська", + "mdr": "мандарська", + "men": "менде (Сьєрра-Леоне)", + "mga": "середньоірландська (900-1200)", + "mic": "мікмак", + "min": "мінангкабау", + "mis": "мови без коду", + "mkd": "македонська", + "mlg": "малагасійська", + "mlt": "мальтійська", + "mnc": "манчжурська", + "mni": "маніпурська", + "moh": "мохаук", + "mon": "монгольська", + "mos": "мосі", + "mri": "маорійська", + "msa": "малайська (макромова)", + "mul": "мови; що належать до декількох родин", + "mus": "крікська", + "mwl": "мірандська", + "mwr": "марварі", + "mya": "бірманська", + "myv": "ерзянська", + "nap": "неаполітанська", + "nau": "науру", + "nav": "навахо", + "nbl": "південна ндебеле", + "nde": "північна ндебеле", + "ndo": "ндонга", + "nds": "нижньонімецька", + "nep": "непальська", + "new": "бхаса (Непал)", + "nia": "ніасійська", + "niu": "ніуе", + "nld": "голландська", + "nno": "норвезька нюноршк", + "nob": "норвезька букмол", + "nog": "ногайська", + "non": "давньонорвезька", + "nor": "норвезька", + "nqo": "н’ко", + "nso": "сото; північне", + "nwc": "неварі (давня)", + "nya": "ньянджа", + "nym": "ньямвезі", + "nyn": "ньянколе", + "nyo": "ньоро", + "nzi": "нзіма", + "oci": "оксітанська (після 1500)", + "oji": "оджибва", + "ori": "орія", + "orm": "оромо", + "osa": "оседжі", + "oss": "осетинська", + "ota": "оттоманська турецька (1500-1928)", + "pag": "пангасінан", + "pal": "пехлевійська", + "pam": "пампанга", + "pan": "пенджабі", + "pap": "папьяменто", + "pau": "палау", + "peo": "давньоперська (бл. 
600-400 до н.е.)", + "phn": "фінікійська", + "pli": "палі", + "pol": "польська", + "pon": "понапе", + "por": "португальська", + "pro": "провансальська (давня; до 1500 року)", + "pus": "пуштунська", + "que": "кечуа", + "raj": "раджастхані", + "rap": "рапануї", + "rar": "маорійська (острови Кука)", + "roh": "ретророманська", + "rom": "ромська", + "ron": "румунська", + "run": "рунді", + "rup": "македоно-румунська", + "rus": "російська", + "sad": "сандаве", + "sag": "санго", + "sah": "якутська", + "sam": "арамейська (самаритянська)", + "san": "санскрит", + "sas": "сасакська", + "sat": "санталі", + "scn": "сицилійська", + "sco": "шотландська", + "sel": "селькупська", + "sga": "давньоірландська (до 900)", + "shn": "шан", + "sid": "сидама", + "sin": "сингалійська", + "slk": "словацька", + "slv": "словенська", + "sma": "саамська (південна)", + "sme": "саамська (північна)", + "smj": "лулесаамська", + "smn": "саамська (інарі)", + "smo": "самоанська", + "sms": "саамська (сколт)", + "sna": "шона", + "snd": "синдхі", + "snk": "сонікійська", + "sog": "согдійська", + "som": "сомалійська", + "sot": "сото; південна", + "spa": "іспанська", + "sqi": "албанська", + "srd": "сардинська", + "srn": "сранан-тонго", + "srp": "сербська", + "srr": "серер", + "ssw": "свазі", + "suk": "сукума", + "sun": "сунданська", + "sus": "сусу", + "sux": "шумерська", + "swa": "суахілі (макромова)", + "swe": "шведська", + "syc": "сирійська (класична)", + "syr": "сирійська", + "tah": "таїтянська", + "tam": "тамільська", + "tat": "татарська", + "tel": "телугу", + "tem": "тімне", + "ter": "терено", + "tet": "тетум", + "tgk": "таджицька", + "tgl": "тагалог", + "tha": "таїландська", + "tig": "тігре", + "tir": "тигринійська", + "tiv": "тиві", + "tkl": "токелау", + "tlh": "клінгонська", + "tli": "тлінгіт", + "tmh": "тамашек", + "tog": "тонга (ньяса)", + "ton": "тонга (острови Тонга)", + "tpi": "ток-пісін", + "tsi": "цимшіан", + "tsn": "тсвана", + "tso": "цонга", + "tuk": "туркменська", + "tum": "тумбука", + "tur": 
"турецька", + "tvl": "тувалу", + "twi": "тві", + "tyv": "тувінська", + "udm": "удмурдська", + "uga": "угаритська", + "uig": "уйгурська", + "ukr": "українська", + "umb": "умбунду", + "und": "невизначена", + "urd": "урду", + "uzb": "узбецька", + "vai": "вай", + "ven": "венда", + "vie": "в'єтнамська", + "vol": "волапюк", + "vot": "водська", + "wal": "волайтта", + "war": "варай (Філіппіни)", + "was": "вашо", + "wln": "валлонська", + "wol": "волоф", + "xal": "калмицька", + "xho": "хоза", + "yao": "яо", + "yap": "япська", + "yid": "ідиш", + "yor": "йоруба", + "zap": "сапотецька", + "zbl": "бліссимволіка", + "zen": "зеназька", + "zha": "чжуань", + "zho": "китайська", + "zul": "зулуська", + "zun": "зуні", + "zxx": "немає мовних даних", + "zza": "заза" + }, + "zh_Hans_CN": { + "aar": "阿法尔语", + "abk": "阿布哈兹语", + "ace": "亚齐语", + "ach": "阿乔利语", + "ada": "阿当梅语", + "ady": "阿迪格语", + "afh": "阿弗里希利语", + "afr": "南非荷兰语", + "ain": "阿伊努语(日本)", + "aka": "阿坎语", + "akk": "阿卡德语", + "ale": "阿留申语", + "alt": "阿尔泰语(南)", + "amh": "阿姆哈拉语", + "ang": "英语(上古,约 450-1100)", + "anp": "安吉卡语", + "ara": "阿拉伯语", + "arc": "阿拉米语(官方,公元前 700-300)", + "arg": "阿拉贡语", + "arn": "阿劳坎语", + "arp": "阿拉帕霍语", + "arw": "阿拉瓦克语", + "asm": "阿萨姆语", + "ast": "阿斯图里亚斯语", + "ava": "阿瓦尔语", + "ave": "阿维斯陀语", + "awa": "阿瓦德语", + "aym": "艾马拉语", + "aze": "阿塞拜疆语", + "bak": "巴什基尔语", + "bal": "俾路支语", + "bam": "班巴拉语", + "ban": "巴厘语", + "bas": "巴萨语(喀麦隆)", + "bej": "贝扎语", + "bel": "白俄罗斯语", + "bem": "本巴语(赞比亚)", + "ben": "孟加拉语", + "bho": "博杰普尔语", + "bik": "比科尔语", + "bin": "比尼语", + "bis": "比斯拉马语", + "bla": "西克西卡语", + "bod": "藏语", + "bos": "波斯尼亚语", + "bra": "布拉吉语", + "bre": "布列塔尼语", + "bua": "布里亚特语", + "bug": "布吉语", + "bul": "保加利亚语", + "byn": "比林语", + "cad": "卡多语", + "car": "加勒比语", + "cat": "加泰罗尼亚语", + "ceb": "宿务语", + "ces": "捷克语", + "cha": "查莫罗语", + "chb": "奇布查语", + "che": "车臣语", + "chg": "察合台语", + "chk": "丘克语", + "chm": "马里语(俄罗斯)", + "chn": "奇努克混合语", + "cho": "乔克托语", + "chp": "奇佩维安语", + "chr": "切罗基语", + "chu": "斯拉夫语(古教会)", + "chv": "楚瓦什语", + 
"chy": "夏延语", + "cop": "科普特语", + "cor": "康沃尔语", + "cos": "科西嘉语", + "cre": "克里语", + "crh": "鞑靼语(克里米亚)", + "csb": "卡舒比语", + "cym": "威尔士语", + "dak": "达科他语", + "dan": "丹麦语", + "dar": "达尔格瓦语", + "del": "特拉华语", + "den": "史拉维语(阿沙巴斯甘)", + "deu": "德语", + "dgr": "多格里布语", + "din": "丁卡语", + "div": "迪维希语", + "doi": "多格拉语", + "dsb": "索布语(下)", + "dua": "杜亚拉语", + "dum": "荷兰语(中古,约 1050-1350)", + "dyu": "迪尤拉语", + "dzo": "宗喀语", + "efi": "埃菲克语", + "egy": "埃及语(古)", + "eka": "埃克丘克语", + "ell": "希腊语(现代,1453-)", + "elx": "埃兰语", + "eng": "英语", + "enm": "英语(中古,1100-1500)", + "epo": "世界语", + "est": "爱沙尼亚语", + "eus": "巴斯克语", + "ewe": "埃维语", + "ewo": "埃翁多语", + "fan": "芳语(赤道几内亚)", + "fao": "法罗语", + "fas": "波斯语", + "fat": "芳蒂语", + "fij": "斐济语", + "fil": "菲律宾语", + "fin": "芬兰语", + "fon": "丰语", + "fra": "法语", + "frm": "法语(中古,约 1400-1600)", + "fro": "法语(上古,842-约 1400)", + "frr": "弗里西语(北)", + "frs": "弗里西亚语(东)", + "fry": "弗里西亚语(西)", + "ful": "富拉语", + "fur": "弗留利语", + "gaa": "加语", + "gay": "卡约语", + "gba": "巴亚语(中非共和国)", + "gez": "吉兹语", + "gil": "吉尔伯特语", + "gla": "盖尔语(苏格兰)", + "gle": "爱尔兰语", + "glg": "加利西亚语", + "glv": "马恩岛语", + "gmh": "德语(中古高地,约 1050-1500)", + "goh": "德语(上古高地,约 750-1050)", + "gon": "贡德语", + "gor": "哥伦打洛语", + "got": "哥特语", + "grb": "格列博语", + "grc": "希腊语(古典,直到 1453)", + "grn": "瓜拉尼语", + "gsw": "德语(瑞士)", + "guj": "古吉拉特语", + "gwi": "库臣语", + "hai": "海达语", + "hat": "克里奥尔语(海地)", + "hau": "豪萨语", + "haw": "夏威夷语", + "heb": "希伯来语", + "her": "赫雷罗语", + "hil": "希利盖农语", + "hin": "印地语", + "hit": "赫梯语", + "hmn": "苗语", + "hmo": "希里莫图语", + "hrv": "克罗地亚语", + "hsb": "索布语(上)", + "hun": "匈牙利语", + "hup": "胡帕语", + "hye": "亚美尼亚语", + "iba": "伊班语", + "ibo": "伊博语", + "ido": "伊多语", + "iii": "彝语(四川)", + "iku": "伊努伊特语", + "ile": "国际语(西方)", + "ilo": "伊洛卡诺语", + "ina": "国际语", + "ind": "印尼语", + "inh": "印古什语", + "ipk": "依努庇克语", + "isl": "冰岛语", + "ita": "意大利语", + "jav": "爪哇语", + "jbo": "逻辑语", + "jpn": "日语", + "jpr": "犹太-波斯语", + "jrb": "犹太-阿拉伯语", + "kaa": "卡拉卡尔帕克语", + "kab": "卡布列语", + "kac": "景颇语", + "kal": "格陵兰语", + "kam": 
"坎巴语(肯尼亚)", + "kan": "卡纳达语", + "kas": "克什米尔语", + "kat": "格鲁吉亚语", + "kau": "卡努里语", + "kaw": "卡威语", + "kaz": "哈萨克语", + "kbd": "卡巴尔达语", + "kha": "卡西语", + "khm": "高棉语", + "kho": "和田语", + "kik": "基库尤语", + "kin": "基尼阿万达语", + "kir": "吉尔吉斯语", + "kmb": "金本杜语", + "kok": "孔卡尼语", + "kom": "科米语", + "kon": "刚果语", + "kor": "朝鲜语", + "kos": "科斯拉伊语", + "kpe": "克佩勒语", + "krc": "卡拉恰伊-巴尔卡尔语", + "krl": "卡累利阿语", + "kru": "库卢克语", + "kua": "宽亚玛语", + "kum": "库梅克语", + "kur": "库尔德语", + "kut": "库特内语", + "lad": "拉迪诺语", + "lah": "拉亨达语", + "lam": "兰巴语", + "lao": "老挝语", + "lat": "拉丁语", + "lav": "拉脱维亚语", + "lez": "列兹金语", + "lim": "林堡语", + "lin": "林加拉语", + "lit": "立陶宛语", + "lol": "芒戈语", + "loz": "洛齐语", + "ltz": "卢森堡语", + "lua": "卢巴-卢拉语", + "lub": "卢巴-加丹加语", + "lug": "干达语", + "lui": "卢伊塞诺语", + "lun": "隆达语", + "luo": "卢奥语(肯尼亚和坦桑尼亚)", + "lus": "卢萨语", + "mad": "马都拉语", + "mag": "摩揭陀语", + "mah": "马绍尔语", + "mai": "米德勒语", + "mak": "望加锡语", + "mal": "马拉雅拉姆语", + "man": "曼丁哥语", + "mar": "马拉地语", + "mas": "马萨伊语", + "mdf": "莫克沙语", + "mdr": "曼达语", + "men": "门德语(塞拉利昂)", + "mga": "爱尔兰语(中古,900-1200)", + "mic": "米克马克语", + "min": "米南卡保语", + "mis": "未被编码的语言", + "mkd": "马其顿语", + "mlg": "马达加斯加语", + "mlt": "马耳他语", + "mnc": "满语", + "mni": "曼尼普尔语", + "moh": "莫霍克语", + "mon": "蒙古语", + "mos": "莫西语", + "mri": "毛利语", + "msa": "马来语族", + "mul": "多种语言", + "mus": "克里克语", + "mwl": "米兰德斯语", + "mwr": "马尔瓦利语", + "mya": "缅甸语", + "myv": "厄尔兹亚语", + "nap": "拿坡里语", + "nau": "瑙鲁语", + "nav": "纳瓦霍语", + "nbl": "恩德贝勒语(南)", + "nde": "恩德贝勒语(北)", + "ndo": "恩敦加语", + "nds": "撒克逊语(低地)", + "nep": "尼泊尔语", + "new": "尼瓦尔语", + "nia": "尼亚斯语", + "niu": "纽埃语", + "nld": "荷兰语", + "nno": "新挪威语", + "nob": "挪威布克莫尔语", + "nog": "诺盖语", + "non": "诺尔斯语(古)", + "nor": "挪威语", + "nqo": "西非书面语言字母", + "nso": "索托语(北)", + "nwc": "尼瓦尔语(古典)", + "nya": "尼扬贾语", + "nym": "尼扬韦齐语", + "nyn": "尼扬科勒语", + "nyo": "尼奥罗语", + "nzi": "恩济马语", + "oci": "奥克西唐语(1500 后)", + "oji": "奥吉布瓦语", + "ori": "奥利亚语", + "orm": "奥罗莫语", + "osa": "奥萨格语", + "oss": "奥塞梯语", + "ota": "土耳其语(奥斯曼,1500-1928)", + "pag": 
"邦阿西楠语", + "pal": "钵罗钵语", + "pam": "邦板牙语", + "pan": "旁遮普语", + "pap": "帕皮亚门托语", + "pau": "帕劳语", + "peo": "波斯语(古,公元前约 600-400)", + "phn": "腓尼基语", + "pli": "巴利语", + "pol": "波兰语", + "pon": "波纳佩语", + "por": "葡萄牙语", + "pro": "普罗旺斯语(古,至 1500)", + "pus": "普什图语", + "que": "克丘亚语", + "raj": "拉贾斯坦语", + "rap": "拉帕努伊语", + "rar": "拉罗汤加语", + "roh": "罗曼什语", + "rom": "罗姆语", + "ron": "罗马尼亚语", + "run": "基隆迪语", + "rup": "阿罗马尼亚语", + "rus": "俄语", + "sad": "桑达韦语", + "sag": "桑戈语", + "sah": "雅库特语", + "sam": "阿拉米语(萨马利亚)", + "san": "梵语", + "sas": "萨萨克语", + "sat": "桑塔利语", + "scn": "西西里语", + "sco": "苏格兰语", + "sel": "塞尔库普语", + "sga": "爱尔兰语(古,至 900)", + "shn": "掸语", + "sid": "锡达莫语", + "sin": "僧加罗语", + "slk": "斯洛伐克语", + "slv": "斯洛文尼亚语", + "sma": "萨米语(南)", + "sme": "萨米语(北)", + "smj": "律勒欧-萨米语", + "smn": "伊纳里-萨米语", + "smo": "萨摩亚语", + "sms": "斯科特-萨米语", + "sna": "修纳语", + "snd": "信德语", + "snk": "索宁克语", + "sog": "粟特语", + "som": "索马里语", + "sot": "索托语(南)", + "spa": "西班牙语", + "sqi": "阿尔巴尼亚语", + "srd": "撒丁语", + "srn": "苏里南汤加语", + "srp": "塞尔维亚语", + "srr": "塞雷尔语", + "ssw": "斯瓦特语", + "suk": "苏库马语", + "sun": "巽他语", + "sus": "苏苏语", + "sux": "苏美尔语", + "swa": "斯瓦希里语族", + "swe": "瑞典语", + "syc": "叙利亚语(古典)", + "syr": "古叙利亚语", + "tah": "塔希提语", + "tam": "泰米尔语", + "tat": "塔塔尔语", + "tel": "泰卢固语", + "tem": "滕内语", + "ter": "特列纳语", + "tet": "特塔姆语", + "tgk": "塔吉克语", + "tgl": "塔加洛语", + "tha": "泰语", + "tig": "提格雷语", + "tir": "提格里尼亚语", + "tiv": "蒂夫语", + "tkl": "托克劳语", + "tlh": "克林贡语", + "tli": "特林吉特语", + "tmh": "塔马舍克语", + "tog": "汤加语 (尼亚萨)", + "ton": "汤加语(汤加岛)", + "tpi": "托克皮辛语", + "tsi": "钦西安语", + "tsn": "茨瓦纳语", + "tso": "聪加语", + "tuk": "土库曼语", + "tum": "奇图姆布卡语", + "tur": "土耳其语", + "tvl": "图瓦卢语", + "twi": "契维语", + "tyv": "图瓦语", + "udm": "乌德穆尔特语", + "uga": "乌加里特语", + "uig": "维吾尔语", + "ukr": "乌克兰语", + "umb": "翁本杜语", + "und": "未确定的语言", + "urd": "乌尔都语", + "uzb": "乌兹别克语", + "vai": "瓦伊语", + "ven": "文达语", + "vie": "越南语", + "vol": "沃拉普克语", + "vot": "沃提克语", + "wal": "瓦拉莫语", + "war": "瓦赖语(菲律宾)", + "was": "瓦肖语", + "wln": "瓦龙语", + "wol": 
"沃洛夫语", + "xal": "卡尔梅克语", + "xho": "科萨语", + "yao": "瑶语", + "yap": "雅浦语", + "yid": "依地语", + "yor": "约鲁巴语", + "zap": "萨波特克语", + "zbl": "布利斯符号", + "zen": "哲纳加语", + "zha": "壮语", + "zho": "中文", + "zul": "祖鲁语", + "zun": "祖尼语", "zxx": "No linguistic content", - "zza": "Zaza" + "zza": "扎扎其语" }, "en": { "aar": "Afar", diff --git a/cps/jinjia.py b/cps/jinjia.py index c91534eb..688d1fba 100644 --- a/cps/jinjia.py +++ b/cps/jinjia.py @@ -25,7 +25,7 @@ from __future__ import division, print_function, unicode_literals import datetime import mimetypes -import re +from uuid import uuid4 from babel.dates import format_date from flask import Blueprint, request, url_for @@ -44,6 +44,8 @@ log = logger.create() def url_for_other_page(page): args = request.view_args.copy() args['page'] = page + for get, val in request.args.items(): + args[get] = val return url_for(request.endpoint, **args) @@ -76,22 +78,18 @@ def mimetype_filter(val): @jinjia.app_template_filter('formatdate') def formatdate_filter(val): try: - conformed_timestamp = re.sub(r"[:]|([-](?!((\d{2}[:]\d{2})|(\d{4}))$))", '', val) - formatdate = datetime.datetime.strptime(conformed_timestamp[:15], "%Y%m%d %H%M%S") - return format_date(formatdate, format='medium', locale=get_locale()) + return format_date(val, format='medium', locale=get_locale()) except AttributeError as e: log.error('Babel error: %s, Current user locale: %s, Current User: %s', e, current_user.locale, current_user.nickname ) - return formatdate + return val @jinjia.app_template_filter('formatdateinput') def format_date_input(val): - conformed_timestamp = re.sub(r"[:]|([-](?!((\d{2}[:]\d{2})|(\d{4}))$))", '', val) - date_obj = datetime.datetime.strptime(conformed_timestamp[:15], "%Y%m%d %H%M%S") - input_date = date_obj.isoformat().split('T', 1)[0] # Hack to support dates <1900 + input_date = val.isoformat().split('T', 1)[0] # Hack to support dates <1900 return '' if input_date == "0101-01-01" else input_date @@ -112,9 +110,26 @@ def timestamptodate(date, 
fmt=None): def yesno(value, yes, no): return yes if value else no + @jinjia.app_template_filter('formatfloat') def formatfloat(value, decimals=1): formatedstring = '%d' % value if (value % 1) != 0: formatedstring = ('%s.%d' % (formatedstring, (value % 1) * 10**decimals)).rstrip('0') return formatedstring + + +@jinjia.app_template_filter('formatseriesindex') +def formatseriesindex_filter(series_index): + if series_index: + if int(series_index) - series_index == 0: + return int(series_index) + else: + return series_index + return 0 + +@jinjia.app_template_filter('uuidfilter') +def uuidfilter(var): + return uuid4() + + diff --git a/cps/kobo.py b/cps/kobo.py index a6dfc3f6..b5d5397b 100644 --- a/cps/kobo.py +++ b/cps/kobo.py @@ -39,22 +39,25 @@ from flask import ( redirect, abort ) -from flask_login import current_user, login_required +from flask_login import current_user from werkzeug.datastructures import Headers from sqlalchemy import func -from sqlalchemy.sql.expression import and_, or_ +from sqlalchemy.sql.expression import and_ from sqlalchemy.exc import StatementError import requests from . 
import config, logger, kobo_auth, db, calibre_db, helper, shelf as shelf_lib, ub +from .helper import get_download_link from .services import SyncToken as SyncToken from .web import download_required -from .kobo_auth import requires_kobo_auth +from .kobo_auth import requires_kobo_auth, get_auth_token KOBO_FORMATS = {"KEPUB": ["KEPUB"], "EPUB": ["EPUB3", "EPUB"]} KOBO_STOREAPI_URL = "https://storeapi.kobo.com" KOBO_IMAGEHOST_URL = "https://kbimages1-a.akamaihd.net" +SYNC_ITEM_LIMIT = 100 + kobo = Blueprint("kobo", __name__, url_prefix="/kobo/") kobo_auth.disable_failed_auth_redirect_for_blueprint(kobo) kobo_auth.register_url_value_preprocessor(kobo) @@ -119,7 +122,11 @@ def redirect_or_proxy_request(): def convert_to_kobo_timestamp_string(timestamp): - return timestamp.strftime("%Y-%m-%dT%H:%M:%SZ") + try: + return timestamp.strftime("%Y-%m-%dT%H:%M:%SZ") + except AttributeError as exc: + log.debug("Timestamp not valid: {}".format(exc)) + return datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%SZ") @kobo.route("/v1/library/sync") @@ -137,68 +144,84 @@ def HandleSyncRequest(): new_books_last_modified = sync_token.books_last_modified new_books_last_created = sync_token.books_last_created new_reading_state_last_modified = sync_token.reading_state_last_modified + new_archived_last_modified = datetime.datetime.min sync_results = [] # We reload the book database so that the user get's a fresh view of the library # in case of external changes (e.g: adding a book through Calibre). calibre_db.reconnect_db(config, ub.app_DB_path) - archived_books = ( - ub.session.query(ub.ArchivedBook) - .filter(ub.ArchivedBook.user_id == int(current_user.id)) - .all() - ) - - # We join-in books that have had their Archived bit recently modified in order to either: - # * Restore them to the user's device. - # * Delete them from the user's device. - # (Ideally we would use a join for this logic, however cross-database joins don't look trivial in SqlAlchemy.) 
- recently_restored_or_archived_books = [] - archived_book_ids = {} - new_archived_last_modified = datetime.datetime.min - for archived_book in archived_books: - if archived_book.last_modified > sync_token.archive_last_modified: - recently_restored_or_archived_books.append(archived_book.book_id) - if archived_book.is_archived: - archived_book_ids[archived_book.book_id] = True - new_archived_last_modified = max( - new_archived_last_modified, archived_book.last_modified) - - # sqlite gives unexpected results when performing the last_modified comparison without the datetime cast. - # It looks like it's treating the db.Books.last_modified field as a string and may fail - # the comparison because of the +00:00 suffix. - changed_entries = ( - calibre_db.session.query(db.Books) - .join(db.Data) - .filter(or_(func.datetime(db.Books.last_modified) > sync_token.books_last_modified, - db.Books.id.in_(recently_restored_or_archived_books))) - .filter(db.Data.format.in_(KOBO_FORMATS)) - .all() - ) + if sync_token.books_last_id > -1: + changed_entries = ( + calibre_db.session.query(db.Books, ub.ArchivedBook.last_modified, ub.ArchivedBook.is_archived) + .join(db.Data).outerjoin(ub.ArchivedBook, db.Books.id == ub.ArchivedBook.book_id) + .filter(db.Books.last_modified >= sync_token.books_last_modified) + .filter(db.Books.id>sync_token.books_last_id) + .filter(db.Data.format.in_(KOBO_FORMATS)) + .order_by(db.Books.last_modified) + .order_by(db.Books.id) + .limit(SYNC_ITEM_LIMIT) + ) + else: + changed_entries = ( + calibre_db.session.query(db.Books, ub.ArchivedBook.last_modified, ub.ArchivedBook.is_archived) + .join(db.Data).outerjoin(ub.ArchivedBook, db.Books.id == ub.ArchivedBook.book_id) + .filter(db.Books.last_modified > sync_token.books_last_modified) + .filter(db.Data.format.in_(KOBO_FORMATS)) + .order_by(db.Books.last_modified) + .order_by(db.Books.id) + .limit(SYNC_ITEM_LIMIT) + ) reading_states_in_new_entitlements = [] for book in changed_entries: - kobo_reading_state = 
get_or_create_reading_state(book.id) + formats = [data.format for data in book.Books.data] + if not 'KEPUB' in formats and config.config_kepubifypath and 'EPUB' in formats: + helper.convert_book_format(book.Books.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.nickname) + + kobo_reading_state = get_or_create_reading_state(book.Books.id) entitlement = { - "BookEntitlement": create_book_entitlement(book, archived=(book.id in archived_book_ids)), - "BookMetadata": get_metadata(book), + "BookEntitlement": create_book_entitlement(book.Books, archived=(book.is_archived == True)), + "BookMetadata": get_metadata(book.Books), } if kobo_reading_state.last_modified > sync_token.reading_state_last_modified: - entitlement["ReadingState"] = get_kobo_reading_state_response(book, kobo_reading_state) + entitlement["ReadingState"] = get_kobo_reading_state_response(book.Books, kobo_reading_state) new_reading_state_last_modified = max(new_reading_state_last_modified, kobo_reading_state.last_modified) - reading_states_in_new_entitlements.append(book.id) + reading_states_in_new_entitlements.append(book.Books.id) - if book.timestamp > sync_token.books_last_created: + if book.Books.timestamp > sync_token.books_last_created: sync_results.append({"NewEntitlement": entitlement}) else: sync_results.append({"ChangedEntitlement": entitlement}) new_books_last_modified = max( - book.last_modified, new_books_last_modified + book.Books.last_modified, new_books_last_modified ) - new_books_last_created = max(book.timestamp, new_books_last_created) + new_books_last_created = max(book.Books.timestamp, new_books_last_created) + max_change = (changed_entries + .from_self() + .filter(ub.ArchivedBook.is_archived) + .order_by(func.datetime(ub.ArchivedBook.last_modified).desc()) + .first() + ) + if max_change: + max_change = max_change.last_modified + else: + max_change = new_archived_last_modified + new_archived_last_modified = max(new_archived_last_modified, max_change) + + # no. 
of books returned + book_count = changed_entries.count() + + # last entry: + if book_count: + books_last_id = changed_entries.all()[-1].Books.id or -1 + else: + books_last_id = -1 + + # generate reading state data changed_reading_states = ( ub.session.query(ub.KoboReadingState) .filter(and_(func.datetime(ub.KoboReadingState.last_modified) > sync_token.reading_state_last_modified, @@ -220,11 +243,12 @@ def HandleSyncRequest(): sync_token.books_last_modified = new_books_last_modified sync_token.archive_last_modified = new_archived_last_modified sync_token.reading_state_last_modified = new_reading_state_last_modified + sync_token.books_last_id = books_last_id - return generate_sync_response(sync_token, sync_results) + return generate_sync_response(sync_token, sync_results, book_count) -def generate_sync_response(sync_token, sync_results): +def generate_sync_response(sync_token, sync_results, set_cont=False): extra_headers = {} if config.config_kobo_proxy: # Merge in sync results from the official Kobo store. 
@@ -240,6 +264,8 @@ def generate_sync_response(sync_token, sync_results): except Exception as e: log.error("Failed to receive or parse response from Kobo's sync endpoint: " + str(e)) + if set_cont: + extra_headers["x-kobo-sync"] = "continue" sync_token.to_headers(extra_headers) response = make_response(jsonify(sync_results), extra_headers) @@ -270,10 +296,11 @@ def get_download_url_for_book(book, book_format): else: host = request.host - return "{url_scheme}://{url_base}:{url_port}/download/{book_id}/{book_format}".format( + return "{url_scheme}://{url_base}:{url_port}/kobo/{auth_token}/download/{book_id}/{book_format}".format( url_scheme=request.scheme, url_base=host, url_port=config.config_external_port, + auth_token=get_auth_token(), book_id=book.id, book_format=book_format.lower() ) @@ -344,7 +371,9 @@ def get_seriesindex(book): def get_metadata(book): download_urls = [] - for book_data in book.data: + kepub = [data for data in book.data if data.format == 'KEPUB'] + + for book_data in kepub if len(kepub) > 0 else book.data: if book_data.format not in KOBO_FORMATS: continue for kobo_format in KOBO_FORMATS[book_data.format]: @@ -435,8 +464,7 @@ def HandleTagCreate(): items_unknown_to_calibre = add_items_to_shelf(items, shelf) if items_unknown_to_calibre: log.debug("Received request to add unknown books to a collection. Silently ignoring items.") - ub.session.commit() - + ub.session_commit() return make_response(jsonify(str(shelf.uuid)), 201) @@ -468,7 +496,7 @@ def HandleTagUpdate(tag_id): shelf.name = name ub.session.merge(shelf) - ub.session.commit() + ub.session_commit() return make_response(' ', 200) @@ -520,8 +548,7 @@ def HandleTagAddItem(tag_id): log.debug("Received request to add an unknown book to a collection. 
Silently ignoring item.") ub.session.merge(shelf) - ub.session.commit() - + ub.session_commit() return make_response('', 201) @@ -561,7 +588,7 @@ def HandleTagRemoveItem(tag_id): shelf.books.filter(ub.BookShelf.book_id == book.id).delete() except KeyError: items_unknown_to_calibre.append(item) - ub.session.commit() + ub.session_commit() if items_unknown_to_calibre: log.debug("Received request to remove an unknown book to a collecition. Silently ignoring item.") @@ -607,7 +634,7 @@ def sync_shelves(sync_token, sync_results): "ChangedTag": tag }) sync_token.tags_last_modified = new_tags_last_modified - ub.session.commit() + ub.session_commit() # Creates a Kobo "Tag" object from a ub.Shelf object @@ -688,7 +715,7 @@ def HandleStateRequest(book_uuid): abort(400, description="Malformed request data is missing 'ReadingStates' key") ub.session.merge(kobo_reading_state) - ub.session.commit() + ub.session_commit() return jsonify({ "RequestResult": "Success", "UpdateResults": [update_results_response], @@ -726,7 +753,7 @@ def get_or_create_reading_state(book_id): kobo_reading_state.statistics = ub.KoboStatistics() book_read.kobo_reading_state = kobo_reading_state ub.session.add(book_read) - ub.session.commit() + ub.session_commit() return book_read.kobo_reading_state @@ -829,8 +856,7 @@ def HandleBookDeletionRequest(book_uuid): archived_book.last_modified = datetime.datetime.utcnow() ub.session.merge(archived_book) - ub.session.commit() - + ub.session_commit() return ("", 204) @@ -856,6 +882,7 @@ def HandleUserRequest(dummy=None): @kobo.route("/v1/products//recommendations", methods=["GET", "POST"]) @kobo.route("/v1/products//nextread", methods=["GET", "POST"]) @kobo.route("/v1/products//reviews", methods=["GET", "POST"]) +@kobo.route("/v1/products/books/external/", methods=["GET", "POST"]) @kobo.route("/v1/products/books/series/", methods=["GET", "POST"]) @kobo.route("/v1/products/books/", methods=["GET", "POST"]) @kobo.route("/v1/products/dailydeal", methods=["GET", 
"POST"]) @@ -865,17 +892,6 @@ def HandleProductsRequest(dummy=None): return redirect_or_proxy_request() -'''@kobo.errorhandler(404) -def handle_404(err): - # This handler acts as a catch-all for endpoints that we don't have an interest in - # implementing (e.g: v1/analytics/gettests, v1/user/recommendations, etc) - if err: - print('404') - return jsonify(error=str(err)), 404 - log.debug("Unknown Request received: %s, method: %s, data: %s", request.base_url, request.method, request.data) - return redirect_or_proxy_request()''' - - def make_calibre_web_auth_response(): # As described in kobo_auth.py, CalibreWeb doesn't make use practical use of this auth/device API call for # authentation (nor for authorization). We return a dummy response just to keep the device happy. @@ -919,7 +935,7 @@ def HandleInitRequest(): store_response_json = store_response.json() if "Resources" in store_response_json: kobo_resources = store_response_json["Resources"] - except: + except Exception: log.error("Failed to receive or parse response from Kobo's init endpoint. Falling back to un-proxied mode.") if not kobo_resources: kobo_resources = NATIVE_KOBO_RESOURCES() @@ -976,6 +992,13 @@ def HandleInitRequest(): return response +@kobo.route("/download//") +@requires_kobo_auth +@download_required +def download_book(book_id, book_format): + return get_download_link(book_id, book_format, "kobo") + + def NATIVE_KOBO_RESOURCES(): return { "account_page": "https://secure.kobobooks.com/profile", diff --git a/cps/kobo_auth.py b/cps/kobo_auth.py index 0f6cd174..8f18db49 100644 --- a/cps/kobo_auth.py +++ b/cps/kobo_auth.py @@ -64,11 +64,11 @@ from datetime import datetime from os import urandom from flask import g, Blueprint, url_for, abort, request -from flask_login import login_user, login_required +from flask_login import login_user, current_user, login_required from flask_babel import gettext as _ -from . import logger, ub, lm -from .web import render_title_template +from . 
import logger, config, calibre_db, db, helper, ub, lm +from .render_template import render_title_template try: from functools import wraps @@ -147,7 +147,15 @@ def generate_auth_token(user_id): auth_token.token_type = 1 ub.session.add(auth_token) - ub.session.commit() + ub.session_commit() + + books = calibre_db.session.query(db.Books).join(db.Data).all() + + for book in books: + formats = [data.format for data in book.data] + if not 'KEPUB' in formats and config.config_kepubifypath and 'EPUB' in formats: + helper.convert_book_format(book.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.nickname) + return render_title_template( "generate_kobo_auth_url.html", title=_(u"Kobo Setup"), @@ -164,5 +172,5 @@ def delete_auth_token(user_id): # Invalidate any prevously generated Kobo Auth token for this user. ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.user_id == user_id)\ .filter(ub.RemoteAuthToken.token_type==1).delete() - ub.session.commit() - return "" + + return ub.session_commit() diff --git a/cps/logger.py b/cps/logger.py index f13d75d3..b204de31 100644 --- a/cps/logger.py +++ b/cps/logger.py @@ -41,10 +41,37 @@ logging.addLevelName(logging.WARNING, "WARN") logging.addLevelName(logging.CRITICAL, "CRIT") +class _Logger(logging.Logger): + + def debug_or_exception(self, message, *args, **kwargs): + if sys.version_info > (3, 7): + if is_debug_enabled(): + self.exception(message, stacklevel=2, *args, **kwargs) + else: + self.error(message, stacklevel=2, *args, **kwargs) + elif sys.version_info > (3, 0): + if is_debug_enabled(): + self.exception(message, stack_info=True, *args, **kwargs) + else: + self.error(message, *args, **kwargs) + else: + if is_debug_enabled(): + self.exception(message, *args, **kwargs) + else: + self.error(message, *args, **kwargs) + + + def debug_no_auth(self, message, *args, **kwargs): + if message.startswith("send: AUTH"): + self.debug(message[:16], stacklevel=2, *args, **kwargs) + else: + self.debug(message, 
stacklevel=2, *args, **kwargs) + + + def get(name=None): return logging.getLogger(name) - def create(): parent_frame = inspect.stack(0)[1] if hasattr(parent_frame, 'frame'): @@ -54,7 +81,6 @@ def create(): parent_module = inspect.getmodule(parent_frame) return get(parent_module.__name__) - def is_debug_enabled(): return logging.root.level <= logging.DEBUG @@ -99,6 +125,7 @@ def setup(log_file, log_level=None): May be called multiple times. ''' log_level = log_level or DEFAULT_LOG_LEVEL + logging.setLoggerClass(_Logger) logging.getLogger(__package__).setLevel(log_level) r = logging.root @@ -126,11 +153,11 @@ def setup(log_file, log_level=None): file_handler.baseFilename = log_file else: try: - file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2) + file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2, encoding='utf-8') except IOError: if log_file == DEFAULT_LOG_FILE: raise - file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=50000, backupCount=2) + file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=50000, backupCount=2, encoding='utf-8') log_file = "" file_handler.setFormatter(FORMATTER) @@ -152,11 +179,11 @@ def create_access_log(log_file, log_name, formatter): access_log.propagate = False access_log.setLevel(logging.INFO) try: - file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2) + file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2, encoding='utf-8') except IOError: if log_file == DEFAULT_ACCESS_LOG: raise - file_handler = RotatingFileHandler(DEFAULT_ACCESS_LOG, maxBytes=50000, backupCount=2) + file_handler = RotatingFileHandler(DEFAULT_ACCESS_LOG, maxBytes=50000, backupCount=2, encoding='utf-8') log_file = "" file_handler.setFormatter(formatter) diff --git a/cps/oauth.py b/cps/oauth.py index 67ef2703..a8995180 100644 --- a/cps/oauth.py +++ b/cps/oauth.py @@ -19,7 +19,6 @@ from __future__ import division, print_function, unicode_literals from flask 
import session - try: from flask_dance.consumer.backend.sqla import SQLAlchemyBackend, first, _get_real_user from sqlalchemy.orm.exc import NoResultFound @@ -34,134 +33,131 @@ except ImportError: except ImportError: pass -try: - class OAuthBackend(SQLAlchemyBackend): - """ - Stores and retrieves OAuth tokens using a relational database through - the `SQLAlchemy`_ ORM. - .. _SQLAlchemy: https://www.sqlalchemy.org/ - """ - def __init__(self, model, session, provider_id, - user=None, user_id=None, user_required=None, anon_user=None, - cache=None): - self.provider_id = provider_id - super(OAuthBackend, self).__init__(model, session, user, user_id, user_required, anon_user, cache) +class OAuthBackend(SQLAlchemyBackend): + """ + Stores and retrieves OAuth tokens using a relational database through + the `SQLAlchemy`_ ORM. - def get(self, blueprint, user=None, user_id=None): - if self.provider_id + '_oauth_token' in session and session[self.provider_id + '_oauth_token'] != '': - return session[self.provider_id + '_oauth_token'] - # check cache - cache_key = self.make_cache_key(blueprint=blueprint, user=user, user_id=user_id) - token = self.cache.get(cache_key) - if token: - return token - - # if not cached, make database queries - query = ( - self.session.query(self.model) - .filter_by(provider=self.provider_id) - ) - uid = first([user_id, self.user_id, blueprint.config.get("user_id")]) - u = first(_get_real_user(ref, self.anon_user) - for ref in (user, self.user, blueprint.config.get("user"))) - - use_provider_user_id = False - if self.provider_id + '_oauth_user_id' in session and session[self.provider_id + '_oauth_user_id'] != '': - query = query.filter_by(provider_user_id=session[self.provider_id + '_oauth_user_id']) - use_provider_user_id = True - - if self.user_required and not u and not uid and not use_provider_user_id: - # raise ValueError("Cannot get OAuth token without an associated user") - return None - # check for user ID - if hasattr(self.model, "user_id") 
and uid: - query = query.filter_by(user_id=uid) - # check for user (relationship property) - elif hasattr(self.model, "user") and u: - query = query.filter_by(user=u) - # if we have the property, but not value, filter by None - elif hasattr(self.model, "user_id"): - query = query.filter_by(user_id=None) - # run query - try: - token = query.one().token - except NoResultFound: - token = None - - # cache the result - self.cache.set(cache_key, token) + .. _SQLAlchemy: https://www.sqlalchemy.org/ + """ + def __init__(self, model, session, provider_id, + user=None, user_id=None, user_required=None, anon_user=None, + cache=None): + self.provider_id = provider_id + super(OAuthBackend, self).__init__(model, session, user, user_id, user_required, anon_user, cache) + def get(self, blueprint, user=None, user_id=None): + if self.provider_id + '_oauth_token' in session and session[self.provider_id + '_oauth_token'] != '': + return session[self.provider_id + '_oauth_token'] + # check cache + cache_key = self.make_cache_key(blueprint=blueprint, user=user, user_id=user_id) + token = self.cache.get(cache_key) + if token: return token - def set(self, blueprint, token, user=None, user_id=None): - uid = first([user_id, self.user_id, blueprint.config.get("user_id")]) - u = first(_get_real_user(ref, self.anon_user) - for ref in (user, self.user, blueprint.config.get("user"))) + # if not cached, make database queries + query = ( + self.session.query(self.model) + .filter_by(provider=self.provider_id) + ) + uid = first([user_id, self.user_id, blueprint.config.get("user_id")]) + u = first(_get_real_user(ref, self.anon_user) + for ref in (user, self.user, blueprint.config.get("user"))) - if self.user_required and not u and not uid: - raise ValueError("Cannot set OAuth token without an associated user") + use_provider_user_id = False + if self.provider_id + '_oauth_user_id' in session and session[self.provider_id + '_oauth_user_id'] != '': + query = 
query.filter_by(provider_user_id=session[self.provider_id + '_oauth_user_id']) + use_provider_user_id = True - # if there was an existing model, delete it - existing_query = ( - self.session.query(self.model) - .filter_by(provider=self.provider_id) - ) - # check for user ID - has_user_id = hasattr(self.model, "user_id") - if has_user_id and uid: - existing_query = existing_query.filter_by(user_id=uid) - # check for user (relationship property) - has_user = hasattr(self.model, "user") - if has_user and u: - existing_query = existing_query.filter_by(user=u) - # queue up delete query -- won't be run until commit() - existing_query.delete() - # create a new model for this token - kwargs = { - "provider": self.provider_id, - "token": token, - } - if has_user_id and uid: - kwargs["user_id"] = uid - if has_user and u: - kwargs["user"] = u - self.session.add(self.model(**kwargs)) - # commit to delete and add simultaneously - self.session.commit() - # invalidate cache - self.cache.delete(self.make_cache_key( - blueprint=blueprint, user=user, user_id=user_id - )) + if self.user_required and not u and not uid and not use_provider_user_id: + # raise ValueError("Cannot get OAuth token without an associated user") + return None + # check for user ID + if hasattr(self.model, "user_id") and uid: + query = query.filter_by(user_id=uid) + # check for user (relationship property) + elif hasattr(self.model, "user") and u: + query = query.filter_by(user=u) + # if we have the property, but not value, filter by None + elif hasattr(self.model, "user_id"): + query = query.filter_by(user_id=None) + # run query + try: + token = query.one().token + except NoResultFound: + token = None - def delete(self, blueprint, user=None, user_id=None): - query = ( - self.session.query(self.model) - .filter_by(provider=self.provider_id) - ) - uid = first([user_id, self.user_id, blueprint.config.get("user_id")]) - u = first(_get_real_user(ref, self.anon_user) - for ref in (user, self.user, 
blueprint.config.get("user"))) + # cache the result + self.cache.set(cache_key, token) - if self.user_required and not u and not uid: - raise ValueError("Cannot delete OAuth token without an associated user") + return token - # check for user ID - if hasattr(self.model, "user_id") and uid: - query = query.filter_by(user_id=uid) - # check for user (relationship property) - elif hasattr(self.model, "user") and u: - query = query.filter_by(user=u) - # if we have the property, but not value, filter by None - elif hasattr(self.model, "user_id"): - query = query.filter_by(user_id=None) - # run query - query.delete() - self.session.commit() - # invalidate cache - self.cache.delete(self.make_cache_key( - blueprint=blueprint, user=user, user_id=user_id, - )) + def set(self, blueprint, token, user=None, user_id=None): + uid = first([user_id, self.user_id, blueprint.config.get("user_id")]) + u = first(_get_real_user(ref, self.anon_user) + for ref in (user, self.user, blueprint.config.get("user"))) -except Exception: - pass + if self.user_required and not u and not uid: + raise ValueError("Cannot set OAuth token without an associated user") + + # if there was an existing model, delete it + existing_query = ( + self.session.query(self.model) + .filter_by(provider=self.provider_id) + ) + # check for user ID + has_user_id = hasattr(self.model, "user_id") + if has_user_id and uid: + existing_query = existing_query.filter_by(user_id=uid) + # check for user (relationship property) + has_user = hasattr(self.model, "user") + if has_user and u: + existing_query = existing_query.filter_by(user=u) + # queue up delete query -- won't be run until commit() + existing_query.delete() + # create a new model for this token + kwargs = { + "provider": self.provider_id, + "token": token, + } + if has_user_id and uid: + kwargs["user_id"] = uid + if has_user and u: + kwargs["user"] = u + self.session.add(self.model(**kwargs)) + # commit to delete and add simultaneously + self.session.commit() + # 
invalidate cache + self.cache.delete(self.make_cache_key( + blueprint=blueprint, user=user, user_id=user_id + )) + + def delete(self, blueprint, user=None, user_id=None): + query = ( + self.session.query(self.model) + .filter_by(provider=self.provider_id) + ) + uid = first([user_id, self.user_id, blueprint.config.get("user_id")]) + u = first(_get_real_user(ref, self.anon_user) + for ref in (user, self.user, blueprint.config.get("user"))) + + if self.user_required and not u and not uid: + raise ValueError("Cannot delete OAuth token without an associated user") + + # check for user ID + if hasattr(self.model, "user_id") and uid: + query = query.filter_by(user_id=uid) + # check for user (relationship property) + elif hasattr(self.model, "user") and u: + query = query.filter_by(user=u) + # if we have the property, but not value, filter by None + elif hasattr(self.model, "user_id"): + query = query.filter_by(user_id=None) + # run query + query.delete() + self.session.commit() + # invalidate cache + self.cache.delete(self.make_cache_key( + blueprint=blueprint, user=user, user_id=user_id, + )) diff --git a/cps/oauth_bb.py b/cps/oauth_bb.py index 4d489cdd..3fee2e04 100644 --- a/cps/oauth_bb.py +++ b/cps/oauth_bb.py @@ -30,12 +30,15 @@ from flask_babel import gettext as _ from flask_dance.consumer import oauth_authorized, oauth_error from flask_dance.contrib.github import make_github_blueprint, github from flask_dance.contrib.google import make_google_blueprint, google -from flask_login import login_user, current_user +from flask_login import login_user, current_user, login_required from sqlalchemy.orm.exc import NoResultFound from . 
import constants, logger, config, app, ub + +try: + from .oauth import OAuthBackend, backend_resultcode +except NameError: + pass oauth_check = {} @@ -84,11 +87,7 @@ def register_user_with_oauth(user=None): except NoResultFound: # not found, return error return - try: - ub.session.commit() - except Exception as e: - log.exception(e) - ub.session.rollback() + ub.session_commit("User {} with OAuth for provider {} registered".format(user.nickname, oauth_key)) def logout_oauth_user(): @@ -100,16 +99,12 @@ def logout_oauth_user(): if ub.oauth_support: oauthblueprints = [] if not ub.session.query(ub.OAuthProvider).count(): - oauthProvider = ub.OAuthProvider() - oauthProvider.provider_name = "github" - oauthProvider.active = False - ub.session.add(oauthProvider) - ub.session.commit() - oauthProvider = ub.OAuthProvider() - oauthProvider.provider_name = "google" - oauthProvider.active = False - ub.session.add(oauthProvider) - ub.session.commit() + for provider in ("github", "google"): + oauthProvider = ub.OAuthProvider() + oauthProvider.provider_name = provider + oauthProvider.active = False + ub.session.add(oauthProvider) + ub.session_commit("{} Blueprint Created".format(provider)) oauth_ids = ub.session.query(ub.OAuthProvider).all() ele1 = dict(provider_name='github', @@ -199,12 +194,8 @@ if ub.oauth_support: provider_user_id=provider_user_id, token=token, ) - try: - ub.session.add(oauth_entry) - ub.session.commit() - except Exception as e: - log.exception(e) - ub.session.rollback() + ub.session.add(oauth_entry) + ub.session_commit() # Disable Flask-Dance's default behavior for saving the OAuth token # Value differs depending on flask-dance version @@ -235,7 +226,7 @@ if ub.oauth_support: flash(_(u"Link to %(oauth)s Succeeded", oauth=provider_name), category="success") return redirect(url_for('web.profile')) except Exception as e: - log.debug_or_exception(e)
ub.session.rollback() else: flash(_(u"Login failed, No User Linked With OAuth Account"), category="error") @@ -282,7 +273,7 @@ if ub.oauth_support: logout_oauth_user() flash(_(u"Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success") except Exception as e: - log.exception(e) + log.debug_or_exception(e) ub.session.rollback() flash(_(u"Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error") except NoResultFound: diff --git a/cps/opds.py b/cps/opds.py index 78d5d8ed..c66ee836 100644 --- a/cps/opds.py +++ b/cps/opds.py @@ -25,7 +25,7 @@ import sys import datetime from functools import wraps -from flask import Blueprint, request, render_template, Response, g, make_response +from flask import Blueprint, request, render_template, Response, g, make_response, abort from flask_login import current_user from sqlalchemy.sql.expression import func, text, or_, and_ from werkzeug.security import check_password_hash @@ -33,7 +33,8 @@ from werkzeug.security import check_password_hash from . 
import constants, logger, config, db, calibre_db, ub, services, get_locale, isoLanguages from .helper import get_download_link, get_book_cover from .pagination import Pagination -from .web import render_read_books, download_required +from .web import render_read_books +from .usermanagement import load_user_from_request from flask_babel import gettext as _ from babel import Locale as LC from babel.core import UnknownLocaleError @@ -51,7 +52,7 @@ def requires_basic_auth_if_no_ano(f): if not auth or auth.type != 'basic' or not check_auth(auth.username, auth.password): return authenticate() return f(*args, **kwargs) - if config.config_login_type == constants.LOGIN_LDAP and services.ldap: + if config.config_login_type == constants.LOGIN_LDAP and services.ldap and config.config_anonbrowse != 1: return services.ldap.basic_auth_required(f) return decorated @@ -100,7 +101,7 @@ def feed_normal_search(): @requires_basic_auth_if_no_ano def feed_new(): off = request.args.get("offset") or 0 - entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, db.Books, True, [db.Books.timestamp.desc()]) return render_xml_template('feed.xml', entries=entries, pagination=pagination) @@ -118,7 +119,7 @@ def feed_discover(): @requires_basic_auth_if_no_ano def feed_best_rated(): off = request.args.get("offset") or 0 - entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, db.Books, db.Books.ratings.any(db.Ratings.rating > 9), [db.Books.timestamp.desc()]) return render_xml_template('feed.xml', entries=entries, pagination=pagination) @@ -164,7 +165,7 @@ def feed_authorindex(): @requires_basic_auth_if_no_ano def feed_author(book_id): off = request.args.get("offset") or 0 - entries, 
__, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, db.Books, db.Books.authors.any(db.Authors.id == book_id), [db.Books.timestamp.desc()]) @@ -190,7 +191,7 @@ def feed_publisherindex(): @requires_basic_auth_if_no_ano def feed_publisher(book_id): off = request.args.get("offset") or 0 - entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, db.Books, db.Books.publishers.any(db.Publishers.id == book_id), [db.Books.timestamp.desc()]) @@ -218,7 +219,7 @@ def feed_categoryindex(): @requires_basic_auth_if_no_ano def feed_category(book_id): off = request.args.get("offset") or 0 - entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, db.Books, db.Books.tags.any(db.Tags.id == book_id), [db.Books.timestamp.desc()]) @@ -245,7 +246,7 @@ def feed_seriesindex(): @requires_basic_auth_if_no_ano def feed_series(book_id): off = request.args.get("offset") or 0 - entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, db.Books, db.Books.series.any(db.Series.id == book_id), [db.Books.series_index]) @@ -276,7 +277,7 @@ def feed_ratingindex(): @requires_basic_auth_if_no_ano def feed_ratings(book_id): off = request.args.get("offset") or 0 - entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, 
db.Books, db.Books.ratings.any(db.Ratings.id == book_id), [db.Books.timestamp.desc()]) @@ -304,7 +305,7 @@ def feed_formatindex(): @requires_basic_auth_if_no_ano def feed_format(book_id): off = request.args.get("offset") or 0 - entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, db.Books, db.Books.data.any(db.Data.format == book_id.upper()), [db.Books.timestamp.desc()]) @@ -338,7 +339,7 @@ def feed_languagesindex(): @requires_basic_auth_if_no_ano def feed_languages(book_id): off = request.args.get("offset") or 0 - entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), + entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0, db.Books, db.Books.languages.any(db.Languages.id == book_id), [db.Books.timestamp.desc()]) @@ -383,8 +384,13 @@ def feed_shelf(book_id): @opds.route("/opds/download///") @requires_basic_auth_if_no_ano -@download_required def opds_download_link(book_id, book_format): + # I gave up with this: With enabled ldap login, the user doesn't get logged in, therefore it's always guest + # workaround, loading the user from the request and checking its download rights here + # in case of anonymous browsing user is None + user = load_user_from_request(request) or current_user + if not user.role_download(): + return abort(403) if "Kobo" in request.headers.get('User-Agent'): client = "kobo" else: @@ -408,7 +414,7 @@ def get_metadata_calibre_companion(uuid, library): def feed_search(term): if term: - entries = calibre_db.get_search_results(term) + entries, __, ___ = calibre_db.get_search_results(term) entriescount = len(entries) if len(entries) > 0 else 1 pagination = Pagination(1, entriescount, entriescount) return render_xml_template('feed.xml', searchterm=term, entries=entries,
pagination=pagination) @@ -418,10 +424,18 @@ def feed_search(term): def check_auth(username, password): if sys.version_info.major == 3: - username = username.encode('windows-1252') + try: + username = username.encode('windows-1252') + except UnicodeEncodeError: + username = username.encode('utf-8') user = ub.session.query(ub.User).filter(func.lower(ub.User.nickname) == username.decode('utf-8').lower()).first() - return bool(user and check_password_hash(str(user.password), password)) + if bool(user and check_password_hash(str(user.password), password)): + return True + else: + ipAdress = request.headers.get('X-Forwarded-For', request.remote_addr) + log.warning('OPDS Login failed for user "%s" IP-address: %s', username.decode('utf-8'), ipAdress) + return False def authenticate(): diff --git a/cps/remotelogin.py b/cps/remotelogin.py new file mode 100644 index 00000000..d9e7388f --- /dev/null +++ b/cps/remotelogin.py @@ -0,0 +1,138 @@ +# -*- coding: utf-8 -*- + +# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web) +# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11, +# andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh, +# falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe, +# ruben-herold, marblepebble, JackED42, SiphonSquirrel, +# apetresc, nanu-c, mutschler +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . 
+ +import json +from datetime import datetime + +from flask import Blueprint, request, make_response, abort, url_for, flash, redirect +from flask_login import login_required, current_user, login_user +from flask_babel import gettext as _ +from sqlalchemy.sql.expression import true + +from . import config, logger, ub +from .render_template import render_title_template + +try: + from functools import wraps +except ImportError: + pass # We're not using Python 3 + +remotelogin = Blueprint('remotelogin', __name__) +log = logger.create() + + +def remote_login_required(f): + @wraps(f) + def inner(*args, **kwargs): + if config.config_remote_login: + return f(*args, **kwargs) + if request.headers.get('X-Requested-With') == 'XMLHttpRequest': + data = {'status': 'error', 'message': 'Forbidden'} + response = make_response(json.dumps(data, ensure_ascii=False)) + response.headers["Content-Type"] = "application/json; charset=utf-8" + return response, 403 + abort(403) + + return inner + +@remotelogin.route('/remote/login') +@remote_login_required +def remote_login(): + auth_token = ub.RemoteAuthToken() + ub.session.add(auth_token) + ub.session_commit() + verify_url = url_for('remotelogin.verify_token', token=auth_token.auth_token, _external=true) + log.debug(u"Remote Login request with token: %s", auth_token.auth_token) + return render_title_template('remote_login.html', title=_(u"login"), token=auth_token.auth_token, + verify_url=verify_url, page="remotelogin") + + +@remotelogin.route('/verify/') +@remote_login_required +@login_required +def verify_token(token): + auth_token = ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.auth_token == token).first() + + # Token not found + if auth_token is None: + flash(_(u"Token not found"), category="error") + log.error(u"Remote Login token not found") + return redirect(url_for('web.index')) + + # Token expired + elif datetime.now() > auth_token.expiration: + ub.session.delete(auth_token) + ub.session_commit() + +
flash(_(u"Token has expired"), category="error") + log.error(u"Remote Login token expired") + return redirect(url_for('web.index')) + + # Update token with user information + auth_token.user_id = current_user.id + auth_token.verified = True + ub.session_commit() + + flash(_(u"Success! Please return to your device"), category="success") + log.debug(u"Remote Login token for userid %s verified", auth_token.user_id) + return redirect(url_for('web.index')) + + +@remotelogin.route('/ajax/verify_token', methods=['POST']) +@remote_login_required +def token_verified(): + token = request.form['token'] + auth_token = ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.auth_token == token).first() + + data = {} + + # Token not found + if auth_token is None: + data['status'] = 'error' + data['message'] = _(u"Token not found") + + # Token expired + elif datetime.now() > auth_token.expiration: + ub.session.delete(auth_token) + ub.session_commit() + + data['status'] = 'error' + data['message'] = _(u"Token has expired") + + elif not auth_token.verified: + data['status'] = 'not_verified' + + else: + user = ub.session.query(ub.User).filter(ub.User.id == auth_token.user_id).first() + login_user(user) + + ub.session.delete(auth_token) + ub.session_commit("User {} logged in via remotelogin, token deleted".format(user.nickname)) + + data['status'] = 'success' + log.debug(u"Remote Login for userid %s succeeded", user.id) + flash(_(u"you are now logged in as: '%(nickname)s'", nickname=user.nickname), category="success") + + response = make_response(json.dumps(data, ensure_ascii=False)) + response.headers["Content-Type"] = "application/json; charset=utf-8" + + return response diff --git a/cps/render_template.py b/cps/render_template.py new file mode 100644 index 00000000..fdd8abb7 --- /dev/null +++ b/cps/render_template.py @@ -0,0 +1,116 @@ +# -*- coding: utf-8 -*- + +# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web) +# Copyright (C) 2018-2020
OzzieIsaacs +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +from flask import render_template +from flask_babel import gettext as _ +from flask import g +from werkzeug.local import LocalProxy +from flask_login import current_user + +from . import config, constants, ub, logger, db, calibre_db +from .ub import User + + +log = logger.create() + +def get_sidebar_config(kwargs=None): + kwargs = kwargs or [] + if 'content' in kwargs: + content = kwargs['content'] + content = isinstance(content, (User, LocalProxy)) and not content.role_anonymous() + else: + content = 'conf' in kwargs + sidebar = list() + sidebar.append({"glyph": "glyphicon-book", "text": _('Books'), "link": 'web.index', "id": "new", + "visibility": constants.SIDEBAR_RECENT, 'public': True, "page": "root", + "show_text": _('Show recent books'), "config_show":False}) + sidebar.append({"glyph": "glyphicon-fire", "text": _('Hot Books'), "link": 'web.books_list', "id": "hot", + "visibility": constants.SIDEBAR_HOT, 'public': True, "page": "hot", + "show_text": _('Show Hot Books'), "config_show": True}) + sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.books_list', + "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous), + "page": "download", "show_text": _('Show Downloaded Books'), + "config_show": content}) + sidebar.append( + {"glyph": "glyphicon-star", "text": 
_('Top Rated Books'), "link": 'web.books_list', "id": "rated", + "visibility": constants.SIDEBAR_BEST_RATED, 'public': True, "page": "rated", + "show_text": _('Show Top Rated Books'), "config_show": True}) + sidebar.append({"glyph": "glyphicon-eye-open", "text": _('Read Books'), "link": 'web.books_list', "id": "read", + "visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not g.user.is_anonymous), + "page": "read", "show_text": _('Show read and unread'), "config_show": content}) + sidebar.append( + {"glyph": "glyphicon-eye-close", "text": _('Unread Books'), "link": 'web.books_list', "id": "unread", + "visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not g.user.is_anonymous), "page": "unread", + "show_text": _('Show unread'), "config_show": False}) + sidebar.append({"glyph": "glyphicon-random", "text": _('Discover'), "link": 'web.books_list', "id": "rand", + "visibility": constants.SIDEBAR_RANDOM, 'public': True, "page": "discover", + "show_text": _('Show random books'), "config_show": True}) + sidebar.append({"glyph": "glyphicon-inbox", "text": _('Categories'), "link": 'web.category_list', "id": "cat", + "visibility": constants.SIDEBAR_CATEGORY, 'public': True, "page": "category", + "show_text": _('Show category selection'), "config_show": True}) + sidebar.append({"glyph": "glyphicon-bookmark", "text": _('Series'), "link": 'web.series_list', "id": "serie", + "visibility": constants.SIDEBAR_SERIES, 'public': True, "page": "series", + "show_text": _('Show series selection'), "config_show": True}) + sidebar.append({"glyph": "glyphicon-user", "text": _('Authors'), "link": 'web.author_list', "id": "author", + "visibility": constants.SIDEBAR_AUTHOR, 'public': True, "page": "author", + "show_text": _('Show author selection'), "config_show": True}) + sidebar.append( + {"glyph": "glyphicon-text-size", "text": _('Publishers'), "link": 'web.publisher_list', "id": "publisher", + "visibility": constants.SIDEBAR_PUBLISHER, 'public': True, "page": "publisher", + 
"show_text": _('Show publisher selection'), "config_show":True}) + sidebar.append({"glyph": "glyphicon-flag", "text": _('Languages'), "link": 'web.language_overview', "id": "lang", + "visibility": constants.SIDEBAR_LANGUAGE, 'public': (g.user.filter_language() == 'all'), + "page": "language", + "show_text": _('Show language selection'), "config_show": True}) + sidebar.append({"glyph": "glyphicon-star-empty", "text": _('Ratings'), "link": 'web.ratings_list', "id": "rate", + "visibility": constants.SIDEBAR_RATING, 'public': True, + "page": "rating", "show_text": _('Show ratings selection'), "config_show": True}) + sidebar.append({"glyph": "glyphicon-file", "text": _('File formats'), "link": 'web.formats_list', "id": "format", + "visibility": constants.SIDEBAR_FORMAT, 'public': True, + "page": "format", "show_text": _('Show file formats selection'), "config_show": True}) + sidebar.append( + {"glyph": "glyphicon-trash", "text": _('Archived Books'), "link": 'web.books_list', "id": "archived", + "visibility": constants.SIDEBAR_ARCHIVED, 'public': (not g.user.is_anonymous), "page": "archived", + "show_text": _('Show archived books'), "config_show": content}) + sidebar.append( + {"glyph": "glyphicon-th-list", "text": _('Books List'), "link": 'web.books_table', "id": "list", + "visibility": constants.SIDEBAR_LIST, 'public': (not g.user.is_anonymous), "page": "list", + "show_text": _('Show Books List'), "config_show": content}) + + return sidebar + +def get_readbooks_ids(): + if not config.config_read_column: + readBooks = ub.session.query(ub.ReadBook).filter(ub.ReadBook.user_id == int(current_user.id))\ + .filter(ub.ReadBook.read_status == ub.ReadBook.STATUS_FINISHED).all() + return frozenset([x.book_id for x in readBooks]) + else: + try: + readBooks = calibre_db.session.query(db.cc_classes[config.config_read_column])\ + .filter(db.cc_classes[config.config_read_column].value == True).all() + return frozenset([x.book for x in readBooks]) + except (KeyError, AttributeError): 
+ log.error("Custom Column No.%d is not existing in calibre database", config.config_read_column) + return [] + +# Returns the template for rendering and includes the instance name +def render_title_template(*args, **kwargs): + sidebar = get_sidebar_config(kwargs) + return render_template(instance=config.config_calibre_web_title, sidebar=sidebar, + accept=constants.EXTENSIONS_UPLOAD, read_book_ids=get_readbooks_ids(), + *args, **kwargs) diff --git a/cps/server.py b/cps/server.py index 7c2d321d..ac821b31 100644 --- a/cps/server.py +++ b/cps/server.py @@ -22,11 +22,14 @@ import os import errno import signal import socket +import subprocess try: from gevent.pywsgi import WSGIServer from gevent.pool import Pool from gevent import __version__ as _version + from greenlet import GreenletExit + import ssl VERSION = 'Gevent ' + _version _GEVENT = True except ImportError: @@ -134,6 +137,64 @@ class WebServer(object): return sock, _readable_listen_address(*address) + @staticmethod + def _get_args_for_reloading(): + """Determine how the script was executed, and return the args needed + to execute it again in a new process. + Code from https://github.com/pyload/pyload. Author GammaC0de, voulter + """ + rv = [sys.executable] + py_script = sys.argv[0] + args = sys.argv[1:] + # Need to look at main module to determine how it was executed. + __main__ = sys.modules["__main__"] + + # The value of __package__ indicates how Python was called. It may + # not exist if a setuptools script is installed as an egg. It may be + # set incorrectly for entry points created with pip on Windows. + if getattr(__main__, "__package__", None) is None or ( + os.name == "nt" + and __main__.__package__ == "" + and not os.path.exists(py_script) + and os.path.exists("{}.exe".format(py_script)) + ): + # Executed a file, like "python app.py". + py_script = os.path.abspath(py_script) + + if os.name == "nt": + # Windows entry points have ".exe" extension and should be + # called directly. 
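A condensed, runnable sketch of what the `_get_args_for_reloading` helper computes (the Windows `.exe` special cases are omitted; `args_for_reloading` and `main_package` are illustrative names standing in for the real function and `__main__.__package__`):

```python
import os

def args_for_reloading(executable, argv, main_package=None):
    """Rebuild the command line so the process can re-launch itself."""
    rv = [executable]
    py_script = argv[0]
    args = argv[1:]
    if main_package is None:
        # Launched as "python app.py" - re-run the absolute script path.
        rv.append(os.path.abspath(py_script))
    else:
        # Launched as "python -m package" - re-run via -m.
        name = os.path.splitext(os.path.basename(py_script))[0]
        module = main_package if name == "__main__" else "{}.{}".format(main_package, name)
        rv.extend(("-m", module.lstrip(".")))
    rv.extend(args)
    return rv
```

The resulting list is what `subprocess.call(args, close_fds=True)` later receives in `WebServer.start`.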
+ if not os.path.exists(py_script) and os.path.exists("{}.exe".format(py_script)): + py_script += ".exe" + + if ( + os.path.splitext(sys.executable)[1] == ".exe" + and os.path.splitext(py_script)[1] == ".exe" + ): + rv.pop(0) + + rv.append(py_script) + else: + # Executed a module, like "python -m module". + if sys.argv[0] == "-m": + args = sys.argv + else: + if os.path.isfile(py_script): + # Rewritten by Python from "-m script" to "/path/to/script.py". + py_module = __main__.__package__ + name = os.path.splitext(os.path.basename(py_script))[0] + + if name != "__main__": + py_module += ".{}".format(name) + else: + # Incorrectly rewritten by pydevd debugger from "-m script" to "script". + py_module = py_script + + rv.extend(("-m", py_module.lstrip("."))) + + rv.extend(args) + return rv + def _start_gevent(self): ssl_args = self.ssl_args or {} @@ -143,6 +204,16 @@ class WebServer(object): output = _readable_listen_address(self.listen_address, self.listen_port) log.info('Starting Gevent server on %s', output) self.wsgiserver = WSGIServer(sock, self.app, log=self.access_logger, spawn=Pool(), **ssl_args) + if ssl_args: + wrap_socket = self.wsgiserver.wrap_socket + def my_wrap_socket(*args, **kwargs): + try: + return wrap_socket(*args, **kwargs) + except (ssl.SSLError, OSError) as ex: + log.warning('Gevent SSL Error: %s', ex) + raise GreenletExit + + self.wsgiserver.wrap_socket = my_wrap_socket self.wsgiserver.serve_forever() finally: if self.unix_socket_file: @@ -187,22 +258,16 @@ class WebServer(object): return True log.info("Performing restart of Calibre-Web") - arguments = list(sys.argv) - arguments.insert(0, sys.executable) - if os.name == 'nt': - arguments = ["\"%s\"" % a for a in arguments] - os.execv(sys.executable, arguments) + args = self._get_args_for_reloading() + subprocess.call(args, close_fds=True) return True - def _killServer(self, ignored_signum, ignored_frame): + def _killServer(self, __, ___): self.stop() def stop(self, restart=False): from . 
import updater_thread updater_thread.stop() - from . import calibre_db - calibre_db.stop() - log.info("webserver stop (restart=%s)", restart) self.restart = restart diff --git a/cps/services/SyncToken.py b/cps/services/SyncToken.py index f6db960b..da5d93a3 100644 --- a/cps/services/SyncToken.py +++ b/cps/services/SyncToken.py @@ -64,7 +64,7 @@ class SyncToken: books_last_modified: Datetime representing the last modified book that the device knows about. """ - SYNC_TOKEN_HEADER = "x-kobo-synctoken" + SYNC_TOKEN_HEADER = "x-kobo-synctoken" # nosec VERSION = "1-1-0" LAST_MODIFIED_ADDED_VERSION = "1-1-0" MIN_VERSION = "1-0-0" @@ -85,17 +85,19 @@ class SyncToken: "archive_last_modified": {"type": "string"}, "reading_state_last_modified": {"type": "string"}, "tags_last_modified": {"type": "string"}, + "books_last_id": {"type": "integer", "optional": True} }, } def __init__( self, - raw_kobo_store_token="", + raw_kobo_store_token="", # nosec books_last_created=datetime.min, books_last_modified=datetime.min, archive_last_modified=datetime.min, reading_state_last_modified=datetime.min, tags_last_modified=datetime.min, + books_last_id=-1 ): self.raw_kobo_store_token = raw_kobo_store_token self.books_last_created = books_last_created @@ -103,11 +105,12 @@ class SyncToken: self.archive_last_modified = archive_last_modified self.reading_state_last_modified = reading_state_last_modified self.tags_last_modified = tags_last_modified + self.books_last_id = books_last_id @staticmethod def from_headers(headers): sync_token_header = headers.get(SyncToken.SYNC_TOKEN_HEADER, "") - if sync_token_header == "": + if sync_token_header == "": # nosec return SyncToken() # On the first sync from a Kobo device, we may receive the SyncToken @@ -137,9 +140,12 @@ class SyncToken: archive_last_modified = get_datetime_from_json(data_json, "archive_last_modified") reading_state_last_modified = get_datetime_from_json(data_json, "reading_state_last_modified") tags_last_modified = 
get_datetime_from_json(data_json, "tags_last_modified") + books_last_id = data_json["books_last_id"] except TypeError: log.error("SyncToken timestamps don't parse to a datetime.") return SyncToken(raw_kobo_store_token=raw_kobo_store_token) + except KeyError: + books_last_id = -1 return SyncToken( raw_kobo_store_token=raw_kobo_store_token, @@ -147,7 +153,8 @@ class SyncToken: books_last_modified=books_last_modified, archive_last_modified=archive_last_modified, reading_state_last_modified=reading_state_last_modified, - tags_last_modified=tags_last_modified + tags_last_modified=tags_last_modified, + books_last_id=books_last_id ) def set_kobo_store_header(self, store_headers): @@ -170,7 +177,8 @@ class SyncToken: "books_last_created": to_epoch_timestamp(self.books_last_created), "archive_last_modified": to_epoch_timestamp(self.archive_last_modified), "reading_state_last_modified": to_epoch_timestamp(self.reading_state_last_modified), - "tags_last_modified": to_epoch_timestamp(self.tags_last_modified) + "tags_last_modified": to_epoch_timestamp(self.tags_last_modified), + "books_last_id":self.books_last_id }, } return b64encode_json(token) diff --git a/cps/services/goodreads_support.py b/cps/services/goodreads_support.py index 58255f3c..9312bc0f 100644 --- a/cps/services/goodreads_support.py +++ b/cps/services/goodreads_support.py @@ -72,7 +72,7 @@ def get_author_info(author_name): author_info = _client.find_author(author_name=author_name) except Exception as ex: # Skip goodreads, if site is down/inaccessible - log.warning('Goodreads website is down/inaccessible? %s', ex) + log.warning('Goodreads website is down/inaccessible? 
%s', ex.__str__()) return if author_info: diff --git a/cps/services/simpleldap.py b/cps/services/simpleldap.py index 336b0f2c..4125bdab 100644 --- a/cps/services/simpleldap.py +++ b/cps/services/simpleldap.py @@ -20,7 +20,7 @@ from __future__ import division, print_function, unicode_literals import base64 from flask_simpleldap import LDAP, LDAPException - +from flask_simpleldap import ldap as pyLDAP from .. import constants, logger try: @@ -38,6 +38,7 @@ def init_app(app, config): app.config['LDAP_HOST'] = config.config_ldap_provider_url app.config['LDAP_PORT'] = config.config_ldap_port + app.config['LDAP_CUSTOM_OPTIONS'] = {pyLDAP.OPT_REFERRALS: 0} if config.config_ldap_encryption == 2: app.config['LDAP_SCHEMA'] = 'ldaps' else: @@ -54,8 +55,14 @@ def init_app(app, config): app.config['LDAP_USERNAME'] = "" app.config['LDAP_PASSWORD'] = base64.b64decode("") if bool(config.config_ldap_cert_path): - app.config['LDAP_REQUIRE_CERT'] = True - app.config['LDAP_CERT_PATH'] = config.config_ldap_cert_path + app.config['LDAP_CUSTOM_OPTIONS'].update({ + pyLDAP.OPT_X_TLS_REQUIRE_CERT: pyLDAP.OPT_X_TLS_DEMAND, + pyLDAP.OPT_X_TLS_CACERTFILE: config.config_ldap_cacert_path, + pyLDAP.OPT_X_TLS_CERTFILE: config.config_ldap_cert_path, + pyLDAP.OPT_X_TLS_KEYFILE: config.config_ldap_key_path, + pyLDAP.OPT_X_TLS_NEWCTX: 0 + }) + app.config['LDAP_BASE_DN'] = config.config_ldap_dn app.config['LDAP_USER_OBJECT_FILTER'] = config.config_ldap_user_object @@ -67,12 +74,19 @@ def init_app(app, config): try: _ldap.init_app(app) + except ValueError: + if bool(config.config_ldap_cert_path): + app.config['LDAP_CUSTOM_OPTIONS'].pop(pyLDAP.OPT_X_TLS_NEWCTX) + try: + _ldap.init_app(app) + except RuntimeError as e: + log.error(e) except RuntimeError as e: log.error(e) -def get_object_details(user=None, group=None, query_filter=None, dn_only=False): - return _ldap.get_object_details(user, group, query_filter, dn_only) +def get_object_details(user=None,query_filter=None): + return 
_ldap.get_object_details(user, query_filter=query_filter) def bind(): @@ -103,7 +117,7 @@ def bind_user(username, password): return None, error except LDAPException as ex: if ex.message == 'Invalid credentials': - error = ("LDAP admin login failed") + error = "LDAP admin login failed" return None, error if ex.message == "Can't contact LDAP server": # log.warning('LDAP Server down: %s', ex) diff --git a/cps/services/worker.py b/cps/services/worker.py new file mode 100644 index 00000000..072674a0 --- /dev/null +++ b/cps/services/worker.py @@ -0,0 +1,219 @@ + +from __future__ import division, print_function, unicode_literals +import threading +import abc +import uuid +import time + +try: + import queue +except ImportError: + import Queue as queue +from datetime import datetime +from collections import namedtuple + +from cps import logger + +log = logger.create() + +# task 'status' consts +STAT_WAITING = 0 +STAT_FAIL = 1 +STAT_STARTED = 2 +STAT_FINISH_SUCCESS = 3 + +# Only retain this many tasks in dequeued list +TASK_CLEANUP_TRIGGER = 20 + +QueuedTask = namedtuple('QueuedTask', 'num, user, added, task') + + +def _get_main_thread(): + for t in threading.enumerate(): + if t.__class__.__name__ == '_MainThread': + return t + raise Exception("main thread not found?!") + + + +class ImprovedQueue(queue.Queue): + def to_list(self): + """ + Returns a copy of all items in the queue without removing them. 
+ """ + + with self.mutex: + return list(self.queue) + +#Class for all worker tasks in the background +class WorkerThread(threading.Thread): + _instance = None + + @classmethod + def getInstance(cls): + if cls._instance is None: + cls._instance = WorkerThread() + return cls._instance + + def __init__(self): + threading.Thread.__init__(self) + + self.dequeued = list() + + self.doLock = threading.Lock() + self.queue = ImprovedQueue() + self.num = 0 + self.start() + + @classmethod + def add(cls, user, task): + ins = cls.getInstance() + ins.num += 1 + ins.queue.put(QueuedTask( + num=ins.num, + user=user, + added=datetime.now(), + task=task, + )) + + @property + def tasks(self): + with self.doLock: + tasks = self.queue.to_list() + self.dequeued + return sorted(tasks, key=lambda x: x.num) + + def cleanup_tasks(self): + with self.doLock: + dead = [] + alive = [] + for x in self.dequeued: + (dead if x.task.dead else alive).append(x) + + # if the ones that we need to keep are within the trigger, do nothing else + delta = len(self.dequeued) - len(dead) + if delta > TASK_CLEANUP_TRIGGER: + ret = alive + else: + # otherwise, lop off the oldest dead tasks until we hit the target trigger + ret = sorted(dead, key=lambda x: x.task.end_time)[-TASK_CLEANUP_TRIGGER:] + alive + + self.dequeued = sorted(ret, key=lambda x: x.num) + + # Main thread loop starting the different tasks + def run(self): + main_thread = _get_main_thread() + while main_thread.is_alive(): + try: + # this blocks until something is available. This can cause issues when the main thread dies - this + # thread will remain alive. We implement a timeout to unblock every second which allows us to check if + # the main thread is still alive. 
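The `to_list` helper relies on `queue.Queue` guarding its internal deque with `self.mutex`: holding that lock lets a caller snapshot the pending items without consuming them, which is how in-progress tasks can be listed in the UI. The same pattern, self-contained:

```python
import queue

class SnapshotQueue(queue.Queue):
    def to_list(self):
        # self.mutex is the same lock put()/get() acquire internally,
        # so the copy is consistent even with producers running.
        with self.mutex:
            return list(self.queue)
```
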
+                # We don't use a daemon here because we don't want the tasks to just be abruptly halted, leading to
+                # possible file / database corruption
+                item = self.queue.get(timeout=1)
+            except queue.Empty:
+                time.sleep(1)
+                continue
+
+            with self.doLock:
+                # add to list so that in-progress tasks show up
+                self.dequeued.append(item)
+
+            # once we hit our trigger, start cleaning up dead tasks
+            if len(self.dequeued) > TASK_CLEANUP_TRIGGER:
+                self.cleanup_tasks()
+
+            # sometimes tasks (like Upload) don't actually have work to do and are created as already finished
+            if item.task.stat is STAT_WAITING:
+                # CalibreTask.start() should wrap all exceptions in its own error handling
+                item.task.start(self)
+
+            self.queue.task_done()
+
+
+class CalibreTask:
+    __metaclass__ = abc.ABCMeta
+
+    def __init__(self, message):
+        self._progress = 0
+        self.stat = STAT_WAITING
+        self.error = None
+        self.start_time = None
+        self.end_time = None
+        self.message = message
+        self.id = uuid.uuid4()
+
+    @abc.abstractmethod
+    def run(self, worker_thread):
+        """Performs the actual work of the task; implemented by each subclass"""
+        raise NotImplementedError
+
+    @abc.abstractmethod
+    def name(self):
+        """Provides the caller some human-readable name for this class"""
+        raise NotImplementedError
+
+    def start(self, *args):
+        self.start_time = datetime.now()
+        self.stat = STAT_STARTED
+
+        # catch any unhandled exceptions in a task and automatically fail it
+        try:
+            self.run(*args)
+        except Exception as e:
+            self._handleError(str(e))
+            log.debug_or_exception(e)
+
+        self.end_time = datetime.now()
+
+    @property
+    def stat(self):
+        return self._stat
+
+    @stat.setter
+    def stat(self, x):
+        self._stat = x
+
+    @property
+    def progress(self):
+        return self._progress
+
+    @progress.setter
+    def progress(self, x):
+        if not 0 <= x <= 1:
+            raise ValueError("Task progress should be within [0, 1] range")
+        self._progress = x
+
+    @property
+    def error(self):
+        return self._error
+
+    @error.setter
+    def error(self, x):
        self._error = x
+
+    @property
+    def runtime(self):
+        return (self.end_time or datetime.now()) - self.start_time
+
+    @property
+    def dead(self):
+        """Determines whether or not this task can be garbage collected
+
+        We have a separate property dictating this because there may be certain tasks that want to override this
+        """
+        # By default, we're good to clean a task if it's "Done"
+        return self.stat in (STAT_FINISH_SUCCESS, STAT_FAIL)
+
+    def _handleError(self, error_message):
+        self.stat = STAT_FAIL
+        self.progress = 1
+        self.error = error_message
+
+    def _handleSuccess(self):
+        self.stat = STAT_FINISH_SUCCESS
+        self.progress = 1
diff --git a/cps/shelf.py b/cps/shelf.py
index 19a350d5..5c6037ac 100644
--- a/cps/shelf.py
+++ b/cps/shelf.py
@@ -22,15 +22,17 @@
 from __future__ import division, print_function, unicode_literals
 
 from datetime import datetime
+import sys
 
 from flask import Blueprint, request, flash, redirect, url_for
 from flask_babel import gettext as _
 from flask_login import login_required, current_user
-from sqlalchemy.sql.expression import func
+from sqlalchemy.sql.expression import func, true
 from sqlalchemy.exc import OperationalError, InvalidRequestError
 
-from . import logger, ub, searched_ids, calibre_db
-from .web import login_required_if_no_ano, render_title_template
+from .
import logger, ub, calibre_db, db +from .render_template import render_title_template +from .usermanagement import login_required_if_no_ano shelf = Blueprint('shelf', __name__) @@ -124,32 +126,28 @@ def search_to_shelf(shelf_id): flash(_(u"You are not allowed to add a book to the the shelf: %(name)s", name=shelf.name), category="error") return redirect(url_for('web.index')) - if current_user.id in searched_ids and searched_ids[current_user.id]: + if current_user.id in ub.searched_ids and ub.searched_ids[current_user.id]: books_for_shelf = list() books_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id).all() if books_in_shelf: book_ids = list() for book_id in books_in_shelf: book_ids.append(book_id.book_id) - for searchid in searched_ids[current_user.id]: + for searchid in ub.searched_ids[current_user.id]: if searchid not in book_ids: books_for_shelf.append(searchid) else: - books_for_shelf = searched_ids[current_user.id] + books_for_shelf = ub.searched_ids[current_user.id] if not books_for_shelf: - log.error("Books are already part of %s", shelf) + log.error("Books are already part of %s", shelf.name) flash(_(u"Books are already part of the shelf: %(name)s", name=shelf.name), category="error") return redirect(url_for('web.index')) - maxOrder = ub.session.query(func.max(ub.BookShelf.order)).filter(ub.BookShelf.shelf == shelf_id).first() - if maxOrder[0] is None: - maxOrder = 0 - else: - maxOrder = maxOrder[0] + maxOrder = ub.session.query(func.max(ub.BookShelf.order)).filter(ub.BookShelf.shelf == shelf_id).first()[0] or 0 for book in books_for_shelf: - maxOrder = maxOrder + 1 + maxOrder += 1 shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book, order=maxOrder)) shelf.last_modified = datetime.utcnow() try: @@ -223,96 +221,76 @@ def remove_from_shelf(shelf_id, book_id): @login_required def create_shelf(): shelf = ub.Shelf() - if request.method == "POST": - to_save = request.form.to_dict() - if "is_public" in to_save: - 
shelf.is_public = 1 - shelf.name = to_save["title"] - shelf.user_id = int(current_user.id) - - is_shelf_name_unique = False - if shelf.is_public == 1: - is_shelf_name_unique = ub.session.query(ub.Shelf) \ - .filter((ub.Shelf.name == to_save["title"]) & (ub.Shelf.is_public == 1)) \ - .first() is None - - if not is_shelf_name_unique: - flash(_(u"A public shelf with the name '%(title)s' already exists.", title=to_save["title"]), - category="error") - else: - is_shelf_name_unique = ub.session.query(ub.Shelf) \ - .filter((ub.Shelf.name == to_save["title"]) & (ub.Shelf.is_public == 0) & - (ub.Shelf.user_id == int(current_user.id)))\ - .first() is None - - if not is_shelf_name_unique: - flash(_(u"A private shelf with the name '%(title)s' already exists.", title=to_save["title"]), - category="error") - - if is_shelf_name_unique: - try: - ub.session.add(shelf) - ub.session.commit() - flash(_(u"Shelf %(title)s created", title=to_save["title"]), category="success") - return redirect(url_for('shelf.show_shelf', shelf_id=shelf.id)) - except (OperationalError, InvalidRequestError): - ub.session.rollback() - flash(_(u"Settings DB is not Writeable"), category="error") - except Exception: - ub.session.rollback() - flash(_(u"There was an error"), category="error") - return render_title_template('shelf_edit.html', shelf=shelf, title=_(u"Create a Shelf"), page="shelfcreate") - else: - return render_title_template('shelf_edit.html', shelf=shelf, title=_(u"Create a Shelf"), page="shelfcreate") + return create_edit_shelf(shelf, title=_(u"Create a Shelf"), page="shelfcreate") @shelf.route("/shelf/edit/", methods=["GET", "POST"]) @login_required def edit_shelf(shelf_id): shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first() + return create_edit_shelf(shelf, title=_(u"Edit a shelf"), page="shelfedit", shelf_id=shelf_id) + + +# if shelf ID is set, we are editing a shelf +def create_edit_shelf(shelf, title, page, shelf_id=False): if request.method == "POST": to_save = 
request.form.to_dict() - - is_shelf_name_unique = False - if shelf.is_public == 1: - is_shelf_name_unique = ub.session.query(ub.Shelf) \ - .filter((ub.Shelf.name == to_save["title"]) & (ub.Shelf.is_public == 1)) \ - .filter(ub.Shelf.id != shelf_id) \ - .first() is None - - if not is_shelf_name_unique: - flash(_(u"A public shelf with the name '%(title)s' already exists.", title=to_save["title"]), - category="error") + if "is_public" in to_save: + shelf.is_public = 1 else: - is_shelf_name_unique = ub.session.query(ub.Shelf) \ - .filter((ub.Shelf.name == to_save["title"]) & (ub.Shelf.is_public == 0) & - (ub.Shelf.user_id == int(current_user.id)))\ - .filter(ub.Shelf.id != shelf_id)\ - .first() is None - - if not is_shelf_name_unique: - flash(_(u"A private shelf with the name '%(title)s' already exists.", title=to_save["title"]), - category="error") - - if is_shelf_name_unique: + shelf.is_public = 0 + if check_shelf_is_unique(shelf, to_save, shelf_id): shelf.name = to_save["title"] - shelf.last_modified = datetime.utcnow() - if "is_public" in to_save: - shelf.is_public = 1 + # shelf.last_modified = datetime.utcnow() + if not shelf_id: + shelf.user_id = int(current_user.id) + ub.session.add(shelf) + shelf_action = "created" + flash_text = _(u"Shelf %(title)s created", title=to_save["title"]) else: - shelf.is_public = 0 + shelf_action = "changed" + flash_text = _(u"Shelf %(title)s changed", title=to_save["title"]) try: ub.session.commit() - flash(_(u"Shelf %(title)s changed", title=to_save["title"]), category="success") - except (OperationalError, InvalidRequestError): + log.info(u"Shelf {} {}".format(to_save["title"], shelf_action)) + flash(flash_text, category="success") + return redirect(url_for('shelf.show_shelf', shelf_id=shelf.id)) + except (OperationalError, InvalidRequestError) as e: ub.session.rollback() + log.debug_or_exception(e) flash(_(u"Settings DB is not Writeable"), category="error") - except Exception: + except Exception as e: ub.session.rollback() + 
log.debug_or_exception(e) flash(_(u"There was an error"), category="error") - return render_title_template('shelf_edit.html', shelf=shelf, title=_(u"Edit a shelf"), page="shelfedit") + return render_title_template('shelf_edit.html', shelf=shelf, title=title, page=page) + + +def check_shelf_is_unique(shelf, to_save, shelf_id=False): + if shelf_id: + ident = ub.Shelf.id != shelf_id else: - return render_title_template('shelf_edit.html', shelf=shelf, title=_(u"Edit a shelf"), page="shelfedit") + ident = true() + if shelf.is_public == 1: + is_shelf_name_unique = ub.session.query(ub.Shelf) \ + .filter((ub.Shelf.name == to_save["title"]) & (ub.Shelf.is_public == 1)) \ + .filter(ident) \ + .first() is None + + if not is_shelf_name_unique: + flash(_(u"A public shelf with the name '%(title)s' already exists.", title=to_save["title"]), + category="error") + else: + is_shelf_name_unique = ub.session.query(ub.Shelf) \ + .filter((ub.Shelf.name == to_save["title"]) & (ub.Shelf.is_public == 0) & + (ub.Shelf.user_id == int(current_user.id))) \ + .filter(ident) \ + .first() is None + + if not is_shelf_name_unique: + flash(_(u"A private shelf with the name '%(title)s' already exists.", title=to_save["title"]), + category="error") + return is_shelf_name_unique def delete_shelf_helper(cur_shelf): @@ -322,9 +300,7 @@ def delete_shelf_helper(cur_shelf): ub.session.delete(cur_shelf) ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id).delete() ub.session.add(ub.ShelfArchive(uuid=cur_shelf.uuid, user_id=cur_shelf.user_id)) - ub.session.commit() - log.info("successfully deleted %s", cur_shelf) - + ub.session_commit("successfully deleted Shelf {}".format(cur_shelf.name)) @shelf.route("/shelf/delete/") @@ -333,44 +309,24 @@ def delete_shelf(shelf_id): cur_shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first() try: delete_shelf_helper(cur_shelf) - except (OperationalError, InvalidRequestError): + except InvalidRequestError: ub.session.rollback() 
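The `check_shelf_is_unique` refactor above uses `sqlalchemy.true()` as a no-op filter when no `shelf_id` is given, so the create and edit paths can share one uniqueness query. A self-contained sketch of that trick against an in-memory table (model and table names are illustrative stand-ins for `ub.Shelf`; assumes SQLAlchemy 1.4+ import paths):

```python
from sqlalchemy import Column, Integer, String, create_engine, true
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Shelf(Base):  # illustrative stand-in for ub.Shelf
    __tablename__ = "shelf"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

def name_is_unique(session, name, shelf_id=None):
    # Editing shelf N: exclude N itself from the clash check.
    # Creating: true() is a no-op filter that matches every row.
    ident = Shelf.id != shelf_id if shelf_id else true()
    return session.query(Shelf).filter(Shelf.name == name, ident).first() is None
```
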
flash(_(u"Settings DB is not Writeable"), category="error") return redirect(url_for('web.index')) -@shelf.route("/shelf/", defaults={'shelf_type': 1}) -@shelf.route("/shelf//") +@shelf.route("/simpleshelf/") @login_required_if_no_ano -def show_shelf(shelf_type, shelf_id): - shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first() +def show_simpleshelf(shelf_id): + return render_show_shelf(2, shelf_id, 1, None) - result = list() - # user is allowed to access shelf - if shelf and check_shelf_view_permissions(shelf): - page = "shelf.html" if shelf_type == 1 else 'shelfdown.html' - books_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id)\ - .order_by(ub.BookShelf.order.asc()).all() - for book in books_in_shelf: - cur_book = calibre_db.get_filtered_book(book.book_id) - if cur_book: - result.append(cur_book) - else: - cur_book = calibre_db.get_book(book.book_id) - if not cur_book: - log.info('Not existing book %s in %s deleted', book.book_id, shelf) - try: - ub.session.query(ub.BookShelf).filter(ub.BookShelf.book_id == book.book_id).delete() - ub.session.commit() - except (OperationalError, InvalidRequestError): - ub.session.rollback() - flash(_(u"Settings DB is not Writeable"), category="error") - return render_title_template(page, entries=result, title=_(u"Shelf: '%(name)s'", name=shelf.name), - shelf=shelf, page="shelf") - else: - flash(_(u"Error opening shelf. 
Shelf does not exist or is not accessible"), category="error") - return redirect(url_for("web.index")) +@shelf.route("/shelf/", defaults={"sort_param": "order", 'page': 1}) +@shelf.route("/shelf//", defaults={'page': 1}) +@shelf.route("/shelf///") +@login_required_if_no_ano +def show_shelf(shelf_id, sort_param, page): + return render_show_shelf(1, shelf_id, page, sort_param) @shelf.route("/shelf/order/", methods=["GET", "POST"]) @@ -394,22 +350,79 @@ def order_shelf(shelf_id): shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first() result = list() if shelf and check_shelf_view_permissions(shelf): - books_in_shelf2 = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id) \ - .order_by(ub.BookShelf.order.asc()).all() - for book in books_in_shelf2: - cur_book = calibre_db.get_filtered_book(book.book_id) - if cur_book: - result.append({'title': cur_book.title, - 'id': cur_book.id, - 'author': cur_book.authors, - 'series': cur_book.series, - 'series_index': cur_book.series_index}) - else: - cur_book = calibre_db.get_book(book.book_id) - result.append({'title': _('Hidden Book'), - 'id': cur_book.id, - 'author': [], - 'series': []}) + result = calibre_db.session.query(db.Books)\ + .join(ub.BookShelf,ub.BookShelf.book_id == db.Books.id , isouter=True) \ + .add_columns(calibre_db.common_filters().label("visible")) \ + .filter(ub.BookShelf.shelf == shelf_id).order_by(ub.BookShelf.order.asc()).all() return render_title_template('shelf_order.html', entries=result, title=_(u"Change order of Shelf: '%(name)s'", name=shelf.name), shelf=shelf, page="shelforder") + + +def change_shelf_order(shelf_id, order): + result = calibre_db.session.query(db.Books).join(ub.BookShelf,ub.BookShelf.book_id == db.Books.id)\ + .filter(ub.BookShelf.shelf == shelf_id).order_by(*order).all() + for index, entry in enumerate(result): + book = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id) \ + .filter(ub.BookShelf.book_id == entry.id).first() + 
book.order = index
+    ub.session_commit("Shelf-id:{} - Order changed".format(shelf_id))
+
+
+def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
+    shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
+
+    # check user is allowed to access shelf
+    if shelf and check_shelf_view_permissions(shelf):
+
+        if shelf_type == 1:
+            # order = [ub.BookShelf.order.asc()]
+            if sort_param == 'pubnew':
+                change_shelf_order(shelf_id, [db.Books.pubdate.desc()])
+            elif sort_param == 'pubold':
+                change_shelf_order(shelf_id, [db.Books.pubdate])
+            elif sort_param == 'abc':
+                change_shelf_order(shelf_id, [db.Books.sort])
+            elif sort_param == 'zyx':
+                change_shelf_order(shelf_id, [db.Books.sort.desc()])
+            elif sort_param == 'new':
+                change_shelf_order(shelf_id, [db.Books.timestamp.desc()])
+            elif sort_param == 'old':
+                change_shelf_order(shelf_id, [db.Books.timestamp])
+            elif sort_param == 'authaz':
+                change_shelf_order(shelf_id, [db.Books.author_sort.asc()])
+            elif sort_param == 'authza':
+                change_shelf_order(shelf_id, [db.Books.author_sort.desc()])
+            page = "shelf.html"
+            pagesize = 0
+        else:
+            pagesize = sys.maxsize
+            page = 'shelfdown.html'
+
+        result, __, pagination = calibre_db.fill_indexpage(page_no, pagesize,
+                                                           db.Books,
+                                                           ub.BookShelf.shelf == shelf_id,
+                                                           [ub.BookShelf.order.asc()],
+                                                           ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
+        # delete shelf entries whose book no longer exists, which can happen if a book is deleted outside calibre-web
+        wrong_entries = calibre_db.session.query(ub.BookShelf)\
+            .join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True)\
+            .filter(db.Books.id == None).all()
+        for entry in wrong_entries:
+            log.info('Deleted non-existing book {} from {}'.format(entry.book_id, shelf))
+            try:
+                ub.session.query(ub.BookShelf).filter(ub.BookShelf.book_id == entry.book_id).delete()
+                ub.session.commit()
+            except (OperationalError, InvalidRequestError):
+                ub.session.rollback()
+                flash(_(u"Settings DB is not Writeable"), category="error")
+
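The `wrong_entries` cleanup above is a classic anti-join: LEFT OUTER JOIN the shelf links against the books table and keep the rows whose book side comes back NULL, i.e. links pointing at books deleted outside Calibre-Web. The same idea in plain SQL (table names are illustrative, not the real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE books (id INTEGER PRIMARY KEY);
    CREATE TABLE book_shelf_link (book_id INTEGER, shelf INTEGER);
    INSERT INTO books VALUES (1), (2);
    INSERT INTO book_shelf_link VALUES (1, 7), (2, 7), (99, 7);  -- book 99 is gone
""")
# Anti-join: link rows with no matching book (b.id IS NULL) are stale.
stale = con.execute("""
    SELECT l.book_id FROM book_shelf_link l
    LEFT OUTER JOIN books b ON l.book_id = b.id
    WHERE b.id IS NULL
""").fetchall()
con.executemany("DELETE FROM book_shelf_link WHERE book_id = ?", stale)
```
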
+        return render_title_template(page,
+                                     entries=result,
+                                     pagination=pagination,
+                                     title=_(u"Shelf: '%(name)s'", name=shelf.name),
+                                     shelf=shelf,
+                                     page="shelf")
+    else:
+        flash(_(u"Error opening shelf. Shelf does not exist or is not accessible"), category="error")
+        return redirect(url_for("web.index"))
diff --git a/cps/static/css/caliBlur.min.css b/cps/static/css/caliBlur.css similarity index 98% rename from cps/static/css/caliBlur.min.css rename to cps/static/css/caliBlur.css index 0714ea7c..ffa5ecca 100644 --- a/cps/static/css/caliBlur.min.css +++ b/cps/static/css/caliBlur.css @@ -240,7 +240,7 @@ body.blur .row-fluid .col-sm-10 { .col-sm-10 .book-meta > div.btn-toolbar:after { content: ''; - direction: block; + direction: ltr; position: fixed; top: 120px; right: 0; @@ -398,20 +398,17 @@ body.blur .row-fluid .col-sm-10 { .shelforder #sortTrue > div:hover { background-color: hsla(0, 0%, 100%, .06) !important; - cursor: move; cursor: grab; - cursor: -webkit-grab; color: #eee } .shelforder #sortTrue > div:active { cursor: grabbing; - cursor: -webkit-grabbing } .shelforder #sortTrue > div:before { content: "\EA53"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; margin-right: 30px; margin-left: 15px; vertical-align: bottom; @@ -446,7 +443,7 @@ body.blur .row-fluid .col-sm-10 { body.shelforder > div.container-fluid > div.row-fluid > div.col-sm-10:before { content: "\e155"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -494,7 +491,7 @@ body.shelforder > div.container-fluid > div.row-fluid > div.col-sm-10:before { } #have_read_cb + label:before, #have_read_cb:checked + label:before { - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-size: 16px; height: 40px; width: 60px; @@ -550,13 +547,12 @@ body.shelforder > div.container-fluid > div.row-fluid > div.col-sm-10:before { height: 60px; width: 50px; cursor: pointer; - 
margin: 0; display: inline-block; - margin-top: -4px; + margin: -4px 0 0; } #archived_cb + label:before, #archived_cb:checked + label:before { - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-size: 16px; height: 40px; width: 60px; @@ -585,7 +581,7 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] > .dow border-left: 2px solid rgba(0, 0, 0, .15) } -div[aria-label="Edit/Delete book"] > .btn-warning { +div[aria-label="Edit/Delete book"] > .btn { width: 50px; height: 60px; margin: 0; @@ -600,7 +596,7 @@ div[aria-label="Edit/Delete book"] > .btn-warning { color: transparent } -div[aria-label="Edit/Delete book"] > .btn-warning > span { +div[aria-label="Edit/Delete book"] > .btn > span { visibility: visible; position: relative; display: inline-block; @@ -616,16 +612,16 @@ div[aria-label="Edit/Delete book"] > .btn-warning > span { margin: auto } -div[aria-label="Edit/Delete book"] > .btn-warning > span:before { +div[aria-label="Edit/Delete book"] > .btn > span:before { content: "\EA5d"; - font-family: plex-icons; + font-family: plex-icons, serif; font-size: 20px; padding: 16px 15px; display: inline-block; height: 60px } -div[aria-label="Edit/Delete book"] > .btn-warning > span:hover { +div[aria-label="Edit/Delete book"] > .btn > span:hover { color: #fff } @@ -637,7 +633,7 @@ div[aria-label="Edit/Delete book"] > .btn-warning > span:hover { color: var(--color-primary) } -.book { +/* .book { width: 225px; max-width: 225px; position: relative !important; @@ -648,7 +644,7 @@ div[aria-label="Edit/Delete book"] > .btn-warning > span:hover { transform: none !important; min-width: 225px; display: block -} +} */ #infscr-loading img, body > div.container-fluid > div > div.col-sm-10 > div.discover > div.isotope:after, body > div.container-fluid > div > div.col-sm-10 > div.discover > div.isotope:before { display: none @@ -760,7 +756,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > div.navbar-header > a 
.home-btn { color: hsla(0, 0%, 100%, .7); - line-height: 34.29px; + line-height: 34px; margin: 0; padding: 0; position: absolute; @@ -770,7 +766,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > div.navbar-header > a .home-btn > a { color: rgba(255, 255, 255, .7); - font-family: plex-icons-new; + font-family: plex-icons-new, serif; line-height: 60px; position: relative; text-align: center; @@ -800,7 +796,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > div.home-btn > a:hove .glyphicon-search:before { content: "\EA4F"; - font-family: plex-icons + font-family: plex-icons, serif } #nav_about:after, .profileDrop > span:after, .profileDrop > span:before { @@ -831,7 +827,10 @@ body:not(.read-frame) { font-family: Open Sans Semibold, Helvetica Neue, Helvetica, Arial, sans-serif; font-weight: 400; overflow: hidden; - margin: 0 + margin: 0; + /* scroll bar fix for firefox */ + scrollbar-color: hsla(0, 0%, 100%, .2) transparent; + scrollbar-width: thin; } body > div.navbar.navbar-default.navbar-static-top > div > form > div { @@ -963,7 +962,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > form.search-focus > d #form-upload .form-group .btn:before { content: "\e043"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -988,7 +987,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > form.search-focus > d #form-upload .form-group .btn:after { content: "\EA13"; position: absolute; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 8px; background: #3c444a; color: hsla(0, 0%, 100%, .7); @@ -1016,7 +1015,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > form.search-focus > d text-transform: none; font-weight: 400; font-style: normal; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; line-height: 1; @@ 
-1072,7 +1071,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > form.search-focus > d body > div.navbar.navbar-default.navbar-static-top > div > form > div > span > button:before { content: "\EA32"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; color: #eee; background: #555; font-size: 10px; @@ -1094,7 +1093,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > form > div > span > b body > div.navbar.navbar-default.navbar-static-top > div > form:before { content: "\EA4F"; display: block; - font-family: plex-icons; + font-family: plex-icons, serif; position: absolute; color: #eee; font-weight: 400; @@ -1117,7 +1116,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > form:before { body > div.navbar.navbar-default.navbar-static-top > div > form.search-focus > div > span.input-group-btn:before { content: "\EA4F"; display: block; - font-family: plex-icons; + font-family: plex-icons, serif; position: absolute; left: -298px; top: 8px; @@ -1190,7 +1189,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > div.navbar-collapse.c body > div.navbar.navbar-default.navbar-static-top > div > div.navbar-collapse.collapse > ul > li > #top_admin > .glyphicon-dashboard::before { content: "\EA31"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 20px } @@ -1269,7 +1268,7 @@ body > div.container-fluid > div > div.col-sm-10 > div > form > a:hover { user-select: none } -.navigation li, .navigation li:not(ul>li) { +.navigation li, .navigation li:not(ul > li) { border-radius: 0 4px 4px 0 } @@ -1349,32 +1348,32 @@ body > div.container-fluid > div > div.col-sm-10 > div > form > a:hover { #nav_hot .glyphicon-fire::before { content: "\1F525"; - font-family: glyphicons regular + font-family: glyphicons regular, serif } .glyphicon-star:before { content: "\EA10"; - font-family: plex-icons-new + font-family: plex-icons-new, serif } #nav_rand .glyphicon-random::before { content: "\EA44"; - 
font-family: plex-icons-new + font-family: plex-icons-new, serif } .glyphicon-list::before { content: "\EA4D"; - font-family: plex-icons-new + font-family: plex-icons-new, serif } #nav_about .glyphicon-info-sign::before { content: "\EA26"; - font-family: plex-icons-new + font-family: plex-icons-new, serif } #nav_cat .glyphicon-inbox::before, .glyphicon-tags::before { content: "\E067"; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; margin-left: 2px } @@ -1420,7 +1419,7 @@ body > div.container-fluid > div > div.col-sm-10 > div > form > a:hover { .navigation .create-shelf a:before { content: "\EA13"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 100%; padding-right: 10px; vertical-align: middle @@ -1458,7 +1457,8 @@ body > div.container-fluid > div > div.col-sm-10 > div > form > a:hover { -webkit-box-shadow: 0 0 2px rgba(0, 0, 0, .35); box-shadow: 0 0 2px rgba(0, 0, 0, .35); position: relative; - z-index: -9 + z-index: -9; + width: 150px !important; } #books > .cover > a, #books_rand > .cover > a, .book.isotope-item > .cover > a, body > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > form > div.col-sm-12 > div.col-sm-12 > div.col-sm-2 > a { @@ -1469,7 +1469,7 @@ body > div.container-fluid > div > div.col-sm-10 > div > form > a:hover { #books > .cover > a:before, #books_rand > .cover > a:before, .book.isotope-item > .cover > a:before, body > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > form > div.col-sm-12 > div.col-sm-12 > div.col-sm-2 > a:before { content: "\e352"; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; background: var(--color-secondary); border-radius: 50%; font-weight: 400; @@ -1517,8 +1517,8 @@ body > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > form top: 0; left: 0; opacity: 0; - background: -webkit-radial-gradient(50% 50%, farthest-corner, rgba(50, 50, 50, .5) 50%, #323232 100%); - background: 
-o-radial-gradient(50% 50%, farthest-corner, rgba(50, 50, 50, .5) 50%, #323232 100%); + background: -webkit-radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%); + background: -o-radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%); background: radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%); z-index: -9 } @@ -1558,8 +1558,8 @@ body > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > form top: 0; left: 0; opacity: 0; - background: -webkit-radial-gradient(50% 50%, farthest-corner, rgba(50, 50, 50, .5) 50%, #323232 100%); - background: -o-radial-gradient(50% 50%, farthest-corner, rgba(50, 50, 50, .5) 50%, #323232 100%); + background: -webkit-radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%); + background: -o-radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%); background: radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%) } @@ -1735,7 +1735,7 @@ body > div.container-fluid > div.row-fluid > div.col-sm-10 { body.me > div.container-fluid > div.row-fluid > div.col-sm-10:before { content: ''; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; font-size: 6vw; @@ -1939,15 +1939,18 @@ body > div.container-fluid > div > div.col-sm-10 > div.discover > form > .btn.bt z-index: 99999 } -.pagination:after, body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.next, body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.previous { +body > div.container-fluid > div > div.col-sm-10 > div.pagination .page-next > a, +body > div.container-fluid > div > div.col-sm-10 > div.pagination .page-previous > a +{ top: 0; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-weight: 100; -webkit-font-smoothing: antialiased; line-height: 60px; height: 60px; 
font-style: normal; - -moz-osx-font-smoothing: grayscale + -moz-osx-font-smoothing: grayscale; + overflow: hidden; } .pagination > a { @@ -1967,68 +1970,46 @@ body > div.container-fluid > div > div.col-sm-10 > div.discover > form > .btn.bt color: #fff !important } -body > div.container-fluid > div > div.col-sm-10 > div.pagination > a, body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.previous + a, body > div.container-fluid > div > div.col-sm-10 > div.pagination > a[href*=page] { +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-item:not(.page-next):not(.page-previous) +{ display: none } -body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.next, body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.previous { +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-next > a, +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-previous > a { color: transparent; + background-color:transparent; margin-left: 0; width: 65px; padding: 0; font-size: 15px; - position: absolute; - display: block !important + display: block !important; + border: none; } -body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.next { - right: 0 -} - -body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.previous { - right: 65px -} - -body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.next:before { - content: "\EA32"; +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-next > a:before, +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-previous > a:before { visibility: visible; color: hsla(0, 0%, 100%, .35); height: 60px; line-height: 60px; border-left: 2px solid transparent; font-size: 20px; - padding: 20px 0 20px 20px; - margin-right: -27px + padding: 20px 25px; + margin-right: -27px; } -body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.previous:before { - 
content: "\EA33"; - visibility: visible; - color: hsla(0, 0%, 100%, .65); - height: 60px; - line-height: 60px; - font-size: 20px; - padding: 20px 25px -} - -body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.next:hover:before, body > div.container-fluid > div > div.col-sm-10 > div.pagination > a.previous:hover:before { - color: #fff -} - -.pagination > strong { - display: none -} - -.pagination:after { +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-next > a:before { content: "\EA32"; - position: relative; - right: 0; - display: inline-block; - color: hsla(0, 0%, 100%, .55); - font-size: 20px; - padding: 0 23px; - margin-left: 20px; - z-index: -1 +} + +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-previous > a:before { + content: "\EA33"; +} + +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-next > a:hover:before, +body > div.container-fluid > div > div.col-sm-10 > div.pagination > .page-previous > a:hover:before { + color: #fff } .pagination > .ellipsis, .pagination > a:nth-last-of-type(2) { @@ -2041,7 +2022,7 @@ body.authorlist > div.container-fluid > div > div.col-sm-10 > div.container > di body.serieslist > div.container-fluid > div > div.col-sm-10:before { content: "\e044"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -2138,15 +2119,14 @@ body > div.container-fluid > div > div.col-sm-10 > div.discover > div.container transition: all 0s } - -.book-meta > .bookinfo > .tags .btn-info, .well > form > .btn { +.well > form > .btn { vertical-align: middle; -o-transition: background-color .2s, color .2s } body.catlist > div.container-fluid > div.row-fluid > div.col-sm-10:before { content: "\E067"; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; font-style: normal; font-weight: 400; line-height: 1; @@ -2166,7 +2146,7 @@ body.catlist > 
div.container-fluid > div.row-fluid > div.col-sm-10:before { body.authorlist > div.container-fluid > div.row-fluid > div.col-sm-10:before, body.langlist > div.container-fluid > div > div.col-sm-10:before { - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -2193,7 +2173,7 @@ body.langlist > div.container-fluid > div > div.col-sm-10:before { content: "\e034" } -body.advsearch > div.container-fluid > div > div.col-sm-10:before, body.authorlist > div.container-fluid > div > div.col-sm-10 > div.container:before, body.catlist > div.container-fluid > div > div.col-sm-10 > div.container:before, body.langlist > div.container-fluid > div > div.col-sm-10 > div.container:before, body.me > div.container-fluid > div > div.col-sm-10 > div.discover:before, body.serieslist > div.container-fluid > div > div.col-sm-10 > div.container:before { +body.authorlist > div.container-fluid > div > div.col-sm-10 > div.container:before, body.catlist > div.container-fluid > div > div.col-sm-10 > div.container:before, body.langlist > div.container-fluid > div > div.col-sm-10 > div.container:before, body.me > div.container-fluid > div > div.col-sm-10 > div.discover:before, body.serieslist > div.container-fluid > div > div.col-sm-10 > div.container:before { top: 60px; font-size: 24px; color: #eee; @@ -2263,10 +2243,14 @@ body.langlist > div.container-fluid > div > div.col-sm-10 > div.container:before content: "Languages" } -body.advsearch > div.container-fluid > div > div.col-sm-10:before { - content: "Advanced Search"; - margin-left: 20%; - left: 0 +body.advsearch > div.container-fluid > div > div.col-sm-10 > div.col-md-10.col-lg-6 { + padding: 15px 10px 15px 40px; +} + +@media screen and (max-width: 992px) { + body.advsearch > div.container-fluid > div > div.col-sm-10 > div.col-md-10.col-lg-6 { + padding-left: 20px; + } } body.me > div.container-fluid > div > div.col-sm-10 > div.discover:before { @@ 
-2503,7 +2487,6 @@ body > div.container-fluid > div > div.col-sm-10 > div.col-sm-8 > form > .btn.bt } textarea { - resize: none; resize: vertical } @@ -2849,7 +2832,7 @@ body > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-8 > form body.advanced_search > div.container-fluid > div > div.col-sm-10 > div.col-sm-8:before { content: "\EA4F"; - font-family: plex-icons; + font-family: plex-icons, serif; font-style: normal; font-weight: 400; line-height: 1; @@ -2949,7 +2932,6 @@ body > div.container-fluid > div > div.col-sm-10 > div > div > div.col-sm-3.col- #bookDetailsModal > .modal-dialog.modal-lg > .modal-content > .modal-body > div > div > div > div.col-sm-3.col-lg-3.col-xs-5 > div.cover, body > div.container-fluid > div > div.col-sm-10 > div > div > div.col-sm-3.col-lg-3.col-xs-5 > div.cover { margin: 0; width: 100%; - height: 100% } #bookDetailsModal > .modal-dialog.modal-lg > .modal-content > .modal-body > div > div > div > div.col-sm-3.col-lg-3.col-xs-5 > div.cover > img, body > div.container-fluid > div > div.col-sm-10 > div > div > div.col-sm-3.col-lg-3.col-xs-5 > div.cover > img { @@ -2974,46 +2956,35 @@ body > div.container-fluid > div > div.col-sm-10 > div > div > div.col-sm-3.col- margin-top: 24px } -.book-meta > .bookinfo > .publishers > span:first-of-type, .book-meta > .bookinfo > .publishing-date > span:first-of-type { +.book-meta > .bookinfo > .languages > span:first-of-type, +.book-meta > .bookinfo > .publishers > span:first-of-type, +.book-meta > .bookinfo > .publishing-date > span:first-of-type, +.real_custom_columns > span:first-of-type { color: hsla(0, 0%, 100%, .45); text-transform: uppercase; - font-family: Open Sans Bold, Helvetica Neue, Helvetica, Arial, sans-serif + font-family: Open Sans Bold, Helvetica Neue, Helvetica, Arial, sans-serif; + width: 200px; + display: inline-block } -.book-meta > .bookinfo > .publishers > span:last-of-type, .book-meta > .bookinfo > .publishing-date > span:last-of-type { +.book-meta > .bookinfo 
> .languages > span:last-of-type, +.book-meta > .bookinfo > .publishers > span:last-of-type, +.book-meta > .bookinfo > .publishing-date > span:last-of-type, +.real_custom_columns > span:last-of-type { font-family: Open Sans Semibold, Helvetica Neue, Helvetica, Arial, sans-serif; color: #fff; font-size: 15px; -webkit-font-smoothing: antialiased } -.book-meta > .bookinfo > .publishers > span:last-of-type { - padding-left: 90px +.book-meta > .bookinfo > .languages > span > a, +.book-meta > .bookinfo > .publishers > span > a, +.book-meta > .bookinfo > .publishing-date > span > a, +.real_custom_columns > span > a { + color: #fff } -.real_custom_columns > span:last-of-type { - padding-left: 90px -} - -.book-meta > .bookinfo > .publishing-date > span:last-of-type { - padding-left: 90px -} - -.book-meta > .bookinfo > .languages > span:first-of-type { - color: hsla(0, 0%, 100%, .45); - text-transform: uppercase; - font-family: Open Sans Bold, Helvetica Neue, Helvetica, Arial, sans-serif -} - -.book-meta > .bookinfo > .languages > span:last-of-type { - font-size: 15px; - font-family: Open Sans Semibold, Helvetica Neue, Helvetica, Arial, sans-serif; - -webkit-font-smoothing: antialiased; - color: #fff; - padding-left: 85px -} - -.book-meta > .bookinfo > .tags .btn-info, .book-meta > h2, body.book .author { +.book-meta > h2, body.book .author { font-family: Open Sans Bold, Helvetica Neue, Helvetica, Arial, sans-serif } @@ -3094,34 +3065,10 @@ body.book .author { background-color: rgba(0, 0, 0, .3) } -.book-meta > .bookinfo > .identifiers > p > .btn-success, .book-meta > .bookinfo > .tags .btn-info { - overflow: hidden; - text-align: center; - white-space: nowrap; - margin: 2px 3px 0 0; - padding: 0 10px -} - -.book-meta > .bookinfo > .tags .btn-info { - background-color: rgba(0, 0, 0, .15); - color: hsla(0, 0%, 100%, .7); - font-size: 13px; - display: inline-block; - border-radius: 4px; - -webkit-transition: background-color .2s, color .2s; - transition: background-color .2s, 
color .2s; - text-transform: none -} - .dropdown-menu, .tooltip.in { -webkit-transition: opacity .15s ease-out, -webkit-transform .15s cubic-bezier(.6, .4, .2, 1.4) } -.book-meta > .bookinfo > .tags .btn-info:hover { - color: #fff; - text-decoration: underline -} - .book-meta > .bookinfo > .identifiers, .book-meta > .bookinfo > .tags { padding-left: 40px; margin: 10px 0 @@ -3207,7 +3154,7 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] > div. #add-to-shelf > span.glyphicon.glyphicon-list:before { content: "\EA59"; - font-family: plex-icons; + font-family: plex-icons, serif; font-size: 18px } @@ -3219,7 +3166,7 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] > div. #read-in-browser > span.glyphicon-eye-open:before, .btn-toolbar > .btn-group > .btn-group > #btnGroupDrop2 > span.glyphicon-eye-open:before { content: "\e352"; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; font-size: 18px; padding-right: 5px } @@ -3231,7 +3178,7 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] > div. 
#btnGroupDrop1 > span.glyphicon-download:before { font-size: 20px; content: "\ea66"; - font-family: plex-icons + font-family: plex-icons, serif } .col-sm-10 .book-meta > div.btn-toolbar { @@ -3335,7 +3282,6 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropd -webkit-box-shadow: 0 4px 10px rgba(0, 0, 0, .35); box-shadow: 0 4px 10px rgba(0, 0, 0, .35); -o-transition: opacity .15s ease-out, transform .15s cubic-bezier(.6, .4, .2, 1.4); - transition: opacity .15s ease-out, transform .15s cubic-bezier(.6, .4, .2, 1.4); transition: opacity .15s ease-out, transform .15s cubic-bezier(.6, .4, .2, 1.4), -webkit-transform .15s cubic-bezier(.6, .4, .2, 1.4); -webkit-transform-origin: center top; -ms-transform-origin: center top; @@ -3363,7 +3309,8 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropd box-shadow: none } -.book-meta > .bookinfo > .identifiers > p > .btn-success { +.book-meta > .bookinfo .btn-info, +.book-meta > .bookinfo .btn-success { background-color: rgba(0, 0, 0, .15); color: hsla(0, 0%, 100%, .7); font-size: 13px; @@ -3377,11 +3324,21 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropd text-transform: none } -.book-meta > .bookinfo > .identifiers > p > .btn-success:hover { +.book-meta > .bookinfo .btn-info:hover, +.book-meta > .bookinfo .btn-success:hover { color: #fff; text-decoration: underline } +.book-meta > .bookinfo .btn-info, +.book-meta > .bookinfo .btn-success { + overflow: hidden; + text-align: center; + white-space: nowrap; + margin: 2px 3px 0 0; + padding: 0 10px +} + #bookDetailsModal .book-meta { color: hsla(0, 0%, 100%, .7); height: calc(100% - 120px); @@ -3453,7 +3410,7 @@ body > div.container-fluid > div > div.col-sm-10 > div.discover > .btn-primary:l .book-meta > div.more-stuff > .btn-toolbar > .btn-group[aria-label="Remove from shelves"] > a > .glyphicon-remove:before { content: "\ea64"; - font-family: plex-icons + font-family: plex-icons, serif 
} body > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > form > .col-sm-6 { @@ -3522,6 +3479,20 @@ body.shelf > div.container-fluid > div > div.col-sm-10 > div.discover > h2 { padding-right: 25px !important } +body.shelf-down > .discover > h2 { + color: inherit; + margin-bottom: 20px; +} + +body.shelf-down > .discover > .row > .book.col-sm-3.col-lg-2.col-xs-6 { + max-width: 225px !important; + width: 225px !important; +} + +body.shelf-down .btn-group button#btnGroupDrop1 { + height: 50px; +} + .author > .container-fluid > .row-fluid > .col-sm-10 > h2:before, .plexBack > a { -moz-text-size-adjust: 100%; -ms-text-size-adjust: 100%; @@ -3553,7 +3524,7 @@ body.shelf > div.container-fluid > div > div.col-sm-10 > div.discover > .shelf-b body.shelf > div.container-fluid > div > div.col-sm-10 > div.discover > .shelf-btn-group > [data-target="#DeleteShelfDialog"]:before { content: "\EA6D"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; position: absolute; color: hsla(0, 0%, 100%, .7); font-size: 20px; @@ -3583,7 +3554,7 @@ body.shelf > div.container-fluid > div > div.col-sm-10 > div.discover > .shelf-b body.shelf > div.container-fluid > div > div.col-sm-10 > div.discover > .shelf-btn-group > [href*=edit]:before { content: "\EA5d"; - font-family: plex-icons; + font-family: plex-icons, serif; position: absolute; color: hsla(0, 0%, 100%, .7); font-size: 20px; @@ -3613,7 +3584,7 @@ body.shelf > div.container-fluid > div > div.col-sm-10 > div.discover > .shelf-b body.shelf > div.container-fluid > div > div.col-sm-10 > div.discover > .shelf-btn-group > [href*=order]:before { content: "\E409"; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; position: absolute; color: hsla(0, 0%, 100%, .7); font-size: 20px; @@ -3750,7 +3721,7 @@ body > div.navbar.navbar-default.navbar-static-top > div > div.navbar-header > a .plexBack > a { color: rgba(255, 255, 255, .7); - font-family: plex-icons-new; + font-family: 
plex-icons-new, serif; -webkit-font-variant-ligatures: normal; font-variant-ligatures: normal; line-height: 60px; @@ -3862,11 +3833,9 @@ body.login > div.container-fluid > div.row-fluid > div.col-sm-10::before, body.l -webkit-transform: translateY(-50%); -ms-transform: translateY(-50%); transform: translateY(-50%); - border-style: solid; vertical-align: middle; -webkit-transition: border .2s, -webkit-transform .4s; -o-transition: border .2s, transform .4s; - transition: border .2s, transform .4s; transition: border .2s, transform .4s, -webkit-transform .4s; margin: 9px 6px } @@ -3885,11 +3854,9 @@ body.login > div.container-fluid > div.row-fluid > div.col-sm-10::before, body.l -webkit-transform: translateY(-50%); -ms-transform: translateY(-50%); transform: translateY(-50%); - border-style: solid; vertical-align: middle; -webkit-transition: border .2s, -webkit-transform .4s; -o-transition: border .2s, transform .4s; - transition: border .2s, transform .4s; transition: border .2s, transform .4s, -webkit-transform .4s; margin: 12px 6px } @@ -3969,7 +3936,7 @@ body.author img.bg-blur[src=undefined] { body.author:not(.authorlist) .undefined-img:before { content: "\e008"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -4118,7 +4085,7 @@ body.shelf.modal-open > .container-fluid { font-size: 18px; color: #999; display: inline-block; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; font-style: normal; font-weight: 400 } @@ -4219,7 +4186,7 @@ body.shelf.modal-open > .container-fluid { #remove-from-shelves > .btn > span:before { content: "\EA52"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; color: transparent; padding-left: 5px } @@ -4231,7 +4198,7 @@ body.shelf.modal-open > .container-fluid { #remove-from-shelves > a:first-of-type:before { content: "\EA4D"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; 
position: absolute; color: hsla(0, 0%, 100%, .45); font-style: normal; @@ -4271,7 +4238,7 @@ body.shelf.modal-open > .container-fluid { content: "\E208"; padding-right: 10px; display: block; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; font-style: normal; font-weight: 400; position: absolute; @@ -4282,7 +4249,6 @@ body.shelf.modal-open > .container-fluid { opacity: .5; -webkit-transition: -webkit-transform .3s ease-out; -o-transition: transform .3s ease-out; - transition: transform .3s ease-out; transition: transform .3s ease-out, -webkit-transform .3s ease-out; -webkit-transform: translate(0, -60px); -ms-transform: translate(0, -60px); @@ -4342,7 +4308,7 @@ body.advanced_search > div.container-fluid > div > div.col-sm-10 > div.col-sm-8 .glyphicon-remove:before { content: "\EA52"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-weight: 400 } @@ -4428,7 +4394,7 @@ body.advanced_search > div.container-fluid > div.row-fluid > div.col-sm-10 > div body:not(.blur) #nav_new:before { content: "\EA4F"; - font-family: plex-icons; + font-family: plex-icons, serif; font-style: normal; font-weight: 400; line-height: 1; @@ -4454,7 +4420,7 @@ body.advanced_search > div.container-fluid > div.row-fluid > div.col-sm-10 > div color: hsla(0, 0%, 100%, .7); cursor: pointer; display: block; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 20px; font-stretch: 100%; font-style: normal; @@ -4550,12 +4516,12 @@ body.admin > div.container-fluid > div > div.col-sm-10 > div.container-fluid > d } body.admin > div.container-fluid > div > div.col-sm-10 > div.container-fluid > div.row > div.col .table > tbody > tr > td, body.admin > div.container-fluid > div > div.col-sm-10 > div.container-fluid > div.row > div.col .table > tbody > tr > th, body.admin > div.container-fluid > div > div.col-sm-10 > div.discover > .table > tbody > tr > td, body.admin > div.container-fluid > div > div.col-sm-10 > div.discover > 
.table > tbody > tr > th { - border: collapse + border: collapse; } body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10::before, body.newuser.admin > div.container-fluid > div.row-fluid > div.col-sm-10::before { content: ''; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; font-size: 6vw; @@ -4659,7 +4625,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. content: "\e352"; display: inline-block; position: absolute; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; background: var(--color-secondary); color: #fff; border-radius: 50%; @@ -4697,8 +4663,8 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. top: 0; left: 0; opacity: 0; - background: -webkit-radial-gradient(50% 50%, farthest-corner, rgba(50, 50, 50, .5) 50%, #323232 100%); - background: -o-radial-gradient(50% 50%, farthest-corner, rgba(50, 50, 50, .5) 50%, #323232 100%); + background: -webkit-radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%); + background: -o-radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%); background: radial-gradient(farthest-corner at 50% 50%, rgba(50, 50, 50, .5) 50%, #323232 100%) } @@ -4750,7 +4716,7 @@ body.admin td > a:hover { .glyphicon-ok::before { content: "\EA55"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-weight: 400 } @@ -4819,7 +4785,7 @@ body:not(.blur):not(.login):not(.me):not(.author):not(.editbook):not(.upload):no background-position: center center, center center, center center !important; background-size: auto, auto, cover !important; -webkit-background-size: auto, auto, cover !important; - -moz-background-size: autom, auto, cover !important; + -moz-background-size: auto, auto, cover !important; -o-background-size: auto, auto, cover !important; width: 100%; height: 60px; @@ -4885,7 
+4851,6 @@ body.read:not(.blur) a[href*=readbooks] { .tooltip.in { opacity: 1; -o-transition: opacity .15s ease-out, transform .15s cubic-bezier(.6, .4, .2, 1.4); - transition: opacity .15s ease-out, transform .15s cubic-bezier(.6, .4, .2, 1.4); transition: opacity .15s ease-out, transform .15s cubic-bezier(.6, .4, .2, 1.4), -webkit-transform .15s cubic-bezier(.6, .4, .2, 1.4); -webkit-transform: translate(0) scale(1); -ms-transform: translate(0) scale(1); @@ -4985,7 +4950,7 @@ body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-3 > div.text-center > #delete:before, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-3 > div.text-center > #delete:before { content: "\EA6D"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 18px; color: hsla(0, 0%, 100%, .7) } @@ -5070,7 +5035,7 @@ body.tasks > div.container-fluid > div > div.col-sm-10 > div.discover > div.boot body.tasks > div.container-fluid > div > div.col-sm-10 > div.discover > div.bootstrap-table > div.fixed-table-container > div.fixed-table-body > #table > thead > tr > th > .th-inner.asc:after { content: "\EA58"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-weight: 400; right: 20px; position: absolute @@ -5078,7 +5043,7 @@ body.tasks > div.container-fluid > div > div.col-sm-10 > div.discover > div.boot body.tasks > div.container-fluid > div > div.col-sm-10 > div.discover > div.bootstrap-table > div.fixed-table-container > div.fixed-table-body > #table > thead > tr > th > .th-inner.desc:after { content: "\EA57"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-weight: 400; right: 20px; position: absolute @@ -5141,7 +5106,7 @@ body.tasks > div.container-fluid > div > div.col-sm-10 > div.discover > div.boot .epub-back:before { content: "\EA1C"; - font-family: plex-icons-new; + 
font-family: plex-icons-new, serif; font-weight: 400; color: #4f4f4f; position: absolute; @@ -5304,7 +5269,7 @@ body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete:before, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete:before { content: "\EA6D"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 18px; color: hsla(0, 0%, 100%, .7); vertical-align: super @@ -5464,7 +5429,7 @@ body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm #main-nav + #scnd-nav .create-shelf a:before { content: "\EA13"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 100%; padding-right: 10px; vertical-align: middle @@ -5509,7 +5474,7 @@ body.admin.modal-open .navbar { content: "\E208"; padding-right: 10px; display: block; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; font-style: normal; font-weight: 400; position: absolute; @@ -5520,7 +5485,6 @@ body.admin.modal-open .navbar { opacity: .5; -webkit-transition: -webkit-transform .3s ease-out; -o-transition: transform .3s ease-out; - transition: transform .3s ease-out; transition: transform .3s ease-out, -webkit-transform .3s ease-out; -webkit-transform: translate(0, -60px); -ms-transform: translate(0, -60px); @@ -5574,22 +5538,22 @@ body.admin.modal-open .navbar { #RestartDialog > .modal-dialog > .modal-content > .modal-header:before { content: "\EA4F"; - font-family: plex-icons-new + font-family: plex-icons-new, serif } #ShutdownDialog > .modal-dialog > .modal-content > .modal-header:before { content: "\E064"; - font-family: glyphicons regular + font-family: glyphicons regular, serif } #StatusDialog > .modal-dialog > .modal-content > .modal-header:before { content: "\EA15"; - font-family: plex-icons-new + font-family: 
plex-icons-new, serif } #deleteModal > .modal-dialog > .modal-content > .modal-header:before { content: "\EA6D"; - font-family: plex-icons-new + font-family: plex-icons-new, serif } #RestartDialog > .modal-dialog > .modal-content > .modal-header:after { @@ -5980,7 +5944,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. .home-btn { height: 48px; - line-height: 28.29px; + line-height: 28px; right: 10px; left: auto } @@ -5992,7 +5956,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. .plexBack { height: 48px; - line-height: 28.29px; + line-height: 28px; left: 48px; display: none } @@ -6071,7 +6035,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. body > div.navbar.navbar-default.navbar-static-top > div > form.search-focus > div > span.input-group-btn:before { content: "\EA33"; display: block; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; position: fixed; left: 0; top: 0; @@ -6223,7 +6187,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. #form-upload .form-group .btn:before { content: "\e043"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; line-height: 1; -webkit-font-smoothing: antialiased; color: #fff; @@ -6241,7 +6205,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. #form-upload .form-group .btn:after { content: "\EA13"; position: absolute; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 8px; background: #3c444a; color: #fff; @@ -6294,7 +6258,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. } #top_admin, #top_tasks { - padding: 11.5px 15px; + padding: 12px 15px; font-size: 13px; line-height: 1.71428571; overflow: hidden @@ -6303,7 +6267,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. 
#top_admin > .glyphicon, #top_tasks > .glyphicon-tasks { position: relative; top: 0; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; line-height: 1; border-radius: 0; background: 0 0; @@ -6322,7 +6286,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. #top_tasks > .glyphicon-tasks::before, body > div.navbar.navbar-default.navbar-static-top > div > div.navbar-collapse.collapse > ul > li > #top_admin > .glyphicon-dashboard::before { text-transform: none; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-rendering: optimizeLegibility; @@ -6647,7 +6611,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. .author > .container-fluid > .row-fluid > .col-sm-10 > h2:after { content: "\e008"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -6852,7 +6816,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. color: hsla(0, 0%, 100%, .7); cursor: pointer; display: block; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; font-size: 20px; font-stretch: 100%; font-style: normal; @@ -6981,16 +6945,12 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. 
margin: 45px } - .book-meta > .bookinfo > .publishing-date > span:last-of-type { - padding-left: 25px - } - - .book-meta > .bookinfo > .publishers > span:last-of-type { - padding-left: 70px - } - - .book-meta > .bookinfo > .languages > span:last-of-type { - padding-left: 65px + .book-meta > .bookinfo > .languages > span:first-of-type, + .book-meta > .bookinfo > .publishers > span:first-of-type, + .book-meta > .bookinfo > .publishing-date > span:first-of-type, + .real_custom_columns > span:first-of-type { + width: 50%; + max-width: 200px; } .book-meta > .bookinfo .publishers { @@ -7023,11 +6983,9 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. -webkit-transform: translateY(-50%); -ms-transform: translateY(-50%); transform: translateY(-50%); - border-style: solid; vertical-align: middle; -webkit-transition: border .2s, -webkit-transform .4s; -o-transition: border .2s, transform .4s; - transition: border .2s, transform .4s; transition: border .2s, transform .4s, -webkit-transform .4s; margin: 12px 6px } @@ -7046,18 +7004,16 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. -webkit-transform: translateY(-50%); -ms-transform: translateY(-50%); transform: translateY(-50%); - border-style: solid; vertical-align: middle; -webkit-transition: border .2s, -webkit-transform .4s; -o-transition: border .2s, transform .4s; - transition: border .2s, transform .4s; transition: border .2s, transform .4s, -webkit-transform .4s; margin: 9px 6px } body.author:not(.authorlist) .blur-wrapper:before, body.author > .container-fluid > .row-fluid > .col-sm-10 > h2:after { content: "\e008"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-weight: 400; z-index: 9; line-height: 1; @@ -7388,7 +7344,6 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. 
transform: translate3d(0, 0, 0); -webkit-transition: -webkit-transform .5s; -o-transition: transform .5s; - transition: transform .5s; transition: transform .5s, -webkit-transform .5s; z-index: 99 } @@ -7403,7 +7358,6 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. transform: translate3d(-240px, 0, 0); -webkit-transition: -webkit-transform .5s; -o-transition: transform .5s; - transition: transform .5s; transition: transform .5s, -webkit-transform .5s; top: 0; margin: 0; @@ -7442,7 +7396,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. text-align: center; min-width: 40px; pointer-events: none; - color: # + /* color: # */ } .col-xs-12 > .row > .col-xs-10 { @@ -7553,7 +7507,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div. body.publisherlist > div.container-fluid > div > div.col-sm-10:before { content: "\e241"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -7573,7 +7527,7 @@ body.publisherlist > div.container-fluid > div > div.col-sm-10:before { body.ratingslist > div.container-fluid > div > div.col-sm-10:before { content: "\e007"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -7599,7 +7553,7 @@ body.ratingslist > div.container-fluid > div > div.col-sm-10:before { body.formatslist > div.container-fluid > div > div.col-sm-10:before { content: "\e022"; - font-family: 'Glyphicons Halflings'; + font-family: 'Glyphicons Halflings', serif; font-style: normal; font-weight: 400; line-height: 1; @@ -7774,7 +7728,7 @@ body.mailset > div.container-fluid > div > div.col-sm-10 > div.discover .editabl body.mailset > div.container-fluid > div > div.col-sm-10 > div.discover .glyphicon-trash:before { content: "\EA6D"; - font-family: plex-icons-new + font-family: plex-icons-new, serif }
#DeleteDomain { @@ -7797,7 +7751,7 @@ body.mailset > div.container-fluid > div > div.col-sm-10 > div.discover .glyphic content: "\E208"; padding-right: 10px; display: block; - font-family: Glyphicons Regular; + font-family: Glyphicons Regular, serif; font-style: normal; font-weight: 400; position: absolute; @@ -7808,7 +7762,6 @@ body.mailset > div.container-fluid > div > div.col-sm-10 > div.discover .glyphic opacity: .5; -webkit-transition: -webkit-transform .3s ease-out; -o-transition: transform .3s ease-out; - transition: transform .3s ease-out; transition: transform .3s ease-out, -webkit-transform .3s ease-out; -webkit-transform: translate(0, -60px); -ms-transform: translate(0, -60px); @@ -7847,7 +7800,7 @@ body.mailset > div.container-fluid > div > div.col-sm-10 > div.discover .glyphic #DeleteDomain > .modal-dialog > .modal-content > .modal-header:before { content: "\EA6D"; - font-family: plex-icons-new; + font-family: plex-icons-new, serif; padding-right: 10px; font-size: 18px; color: #999; diff --git a/cps/static/css/caliBlur_override.css b/cps/static/css/caliBlur_override.css index 05a7c0d8..4c8b6cb0 100644 --- a/cps/static/css/caliBlur_override.css +++ b/cps/static/css/caliBlur_override.css @@ -1,17 +1,24 @@ -body.serieslist.grid-view div.container-fluid>div>div.col-sm-10:before{ - display: none; +body.serieslist.grid-view div.container-fluid > div > div.col-sm-10::before { + display: none; } -.cover .badge{ - position: absolute; - top: 0; - right: 0; - background-color: #cc7b19; - border-radius: 0; - padding: 0 8px; - box-shadow: 0 0 4px rgba(0,0,0,.6); - line-height: 24px; +.cover .badge { + position: absolute; + top: 0; + left: 0; + color: #fff; + background-color: #cc7b19; + border-radius: 0; + padding: 0 8px; + box-shadow: 0 0 4px rgba(0, 0, 0, 0.6); + line-height: 24px; } -.cover{ - box-shadow: 0 0 4px rgba(0,0,0,.6); + +.cover { + box-shadow: 0 0 4px rgba(0, 0, 0, 0.6); +} + +.cover .read { + padding: 0 0; + line-height: 15px; } diff --git 
a/cps/static/css/kthoom.css b/cps/static/css/kthoom.css index cc38740b..233cfe94 100644 --- a/cps/static/css/kthoom.css +++ b/cps/static/css/kthoom.css @@ -33,7 +33,6 @@ body { position: relative; cursor: pointer; padding: 4px; - transition: all 0.2s ease; } @@ -45,7 +44,7 @@ body { #sidebar a.active, #sidebar a.active img + span { - background-color: #45B29D; + background-color: #45b29d; } #sidebar li img { @@ -99,7 +98,7 @@ body { background-color: #ccc; } -#progress .bar-read { +#progress .bar-read { color: #fff; background-color: #45b29d; } diff --git a/cps/static/css/libs/bootstrap-select.min.css b/cps/static/css/libs/bootstrap-select.min.css new file mode 100644 index 00000000..59708ed5 --- /dev/null +++ b/cps/static/css/libs/bootstrap-select.min.css @@ -0,0 +1,6 @@ +/*! + * Bootstrap-select v1.13.14 (https://developer.snapappointments.com/bootstrap-select) + * + * Copyright 2012-2020 SnapAppointments, LLC + * Licensed under MIT (https://github.com/snapappointments/bootstrap-select/blob/master/LICENSE) + */@-webkit-keyframes bs-notify-fadeOut{0%{opacity:.9}100%{opacity:0}}@-o-keyframes bs-notify-fadeOut{0%{opacity:.9}100%{opacity:0}}@keyframes 
bs-notify-fadeOut{0%{opacity:.9}100%{opacity:0}}.bootstrap-select>select.bs-select-hidden,select.bs-select-hidden,select.selectpicker{display:none!important}.bootstrap-select{width:220px\0;vertical-align:middle}.bootstrap-select>.dropdown-toggle{position:relative;width:100%;text-align:right;white-space:nowrap;display:-webkit-inline-box;display:-webkit-inline-flex;display:-ms-inline-flexbox;display:inline-flex;-webkit-box-align:center;-webkit-align-items:center;-ms-flex-align:center;align-items:center;-webkit-box-pack:justify;-webkit-justify-content:space-between;-ms-flex-pack:justify;justify-content:space-between}.bootstrap-select>.dropdown-toggle:after{margin-top:-1px}.bootstrap-select>.dropdown-toggle.bs-placeholder,.bootstrap-select>.dropdown-toggle.bs-placeholder:active,.bootstrap-select>.dropdown-toggle.bs-placeholder:focus,.bootstrap-select>.dropdown-toggle.bs-placeholder:hover{color:#999}.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-danger,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-danger:active,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-danger:focus,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-danger:hover,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-dark,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-dark:active,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-dark:focus,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-dark:hover,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-info,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-info:active,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-info:focus,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-info:hover,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-primary,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-primary:active,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-primary:focus,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-primary:hover,.bootstrap-select>.dropdown-toggle.bs-placeholder.bt
n-secondary,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-secondary:active,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-secondary:focus,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-secondary:hover,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-success,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-success:active,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-success:focus,.bootstrap-select>.dropdown-toggle.bs-placeholder.btn-success:hover{color:rgba(255,255,255,.5)}.bootstrap-select>select{position:absolute!important;bottom:0;left:50%;display:block!important;width:.5px!important;height:100%!important;padding:0!important;opacity:0!important;border:none;z-index:0!important}.bootstrap-select>select.mobile-device{top:0;left:0;display:block!important;width:100%!important;z-index:2!important}.bootstrap-select.is-invalid .dropdown-toggle,.error .bootstrap-select .dropdown-toggle,.has-error .bootstrap-select .dropdown-toggle,.was-validated .bootstrap-select select:invalid+.dropdown-toggle{border-color:#b94a48}.bootstrap-select.is-valid .dropdown-toggle,.was-validated .bootstrap-select select:valid+.dropdown-toggle{border-color:#28a745}.bootstrap-select.fit-width{width:auto!important}.bootstrap-select:not([class*=col-]):not([class*=form-control]):not(.input-group-btn){width:220px}.bootstrap-select .dropdown-toggle:focus,.bootstrap-select>select.mobile-device:focus+.dropdown-toggle{outline:thin dotted #333!important;outline:5px auto -webkit-focus-ring-color!important;outline-offset:-2px}.bootstrap-select.form-control{margin-bottom:0;padding:0;border:none;height:auto}:not(.input-group)>.bootstrap-select.form-control:not([class*=col-]){width:100%}.bootstrap-select.form-control.input-group-btn{float:none;z-index:auto}.form-inline .bootstrap-select,.form-inline 
.bootstrap-select.form-control:not([class*=col-]){width:auto}.bootstrap-select:not(.input-group-btn),.bootstrap-select[class*=col-]{float:none;display:inline-block;margin-left:0}.bootstrap-select.dropdown-menu-right,.bootstrap-select[class*=col-].dropdown-menu-right,.row .bootstrap-select[class*=col-].dropdown-menu-right{float:right}.form-group .bootstrap-select,.form-horizontal .bootstrap-select,.form-inline .bootstrap-select{margin-bottom:0}.form-group-lg .bootstrap-select.form-control,.form-group-sm .bootstrap-select.form-control{padding:0}.form-group-lg .bootstrap-select.form-control .dropdown-toggle,.form-group-sm .bootstrap-select.form-control .dropdown-toggle{height:100%;font-size:inherit;line-height:inherit;border-radius:inherit}.bootstrap-select.form-control-lg .dropdown-toggle,.bootstrap-select.form-control-sm .dropdown-toggle{font-size:inherit;line-height:inherit;border-radius:inherit}.bootstrap-select.form-control-sm .dropdown-toggle{padding:.25rem .5rem}.bootstrap-select.form-control-lg .dropdown-toggle{padding:.5rem 1rem}.form-inline .bootstrap-select .form-control{width:100%}.bootstrap-select.disabled,.bootstrap-select>.disabled{cursor:not-allowed}.bootstrap-select.disabled:focus,.bootstrap-select>.disabled:focus{outline:0!important}.bootstrap-select.bs-container{position:absolute;top:0;left:0;height:0!important;padding:0!important}.bootstrap-select.bs-container .dropdown-menu{z-index:1060}.bootstrap-select .dropdown-toggle .filter-option{position:static;top:0;left:0;float:left;height:100%;width:100%;text-align:left;overflow:hidden;-webkit-box-flex:0;-webkit-flex:0 1 auto;-ms-flex:0 1 auto;flex:0 1 auto}.bs3.bootstrap-select .dropdown-toggle .filter-option{padding-right:inherit}.input-group .bs3-has-addon.bootstrap-select .dropdown-toggle .filter-option{position:absolute;padding-top:inherit;padding-bottom:inherit;padding-left:inherit;float:none}.input-group .bs3-has-addon.bootstrap-select .dropdown-toggle .filter-option 
.filter-option-inner{padding-right:inherit}.bootstrap-select .dropdown-toggle .filter-option-inner-inner{overflow:hidden}.bootstrap-select .dropdown-toggle .filter-expand{width:0!important;float:left;opacity:0!important;overflow:hidden}.bootstrap-select .dropdown-toggle .caret{position:absolute;top:50%;right:12px;margin-top:-2px;vertical-align:middle}.input-group .bootstrap-select.form-control .dropdown-toggle{border-radius:inherit}.bootstrap-select[class*=col-] .dropdown-toggle{width:100%}.bootstrap-select .dropdown-menu{min-width:100%;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}.bootstrap-select .dropdown-menu>.inner:focus{outline:0!important}.bootstrap-select .dropdown-menu.inner{position:static;float:none;border:0;padding:0;margin:0;border-radius:0;-webkit-box-shadow:none;box-shadow:none}.bootstrap-select .dropdown-menu li{position:relative}.bootstrap-select .dropdown-menu li.active small{color:rgba(255,255,255,.5)!important}.bootstrap-select .dropdown-menu li.disabled a{cursor:not-allowed}.bootstrap-select .dropdown-menu li a{cursor:pointer;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.bootstrap-select .dropdown-menu li a.opt{position:relative;padding-left:2.25em}.bootstrap-select .dropdown-menu li a span.check-mark{display:none}.bootstrap-select .dropdown-menu li a span.text{display:inline-block}.bootstrap-select .dropdown-menu li small{padding-left:.5em}.bootstrap-select .dropdown-menu .notify{position:absolute;bottom:5px;width:96%;margin:0 2%;min-height:26px;padding:3px 5px;background:#f5f5f5;border:1px solid #e3e3e3;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,.05);box-shadow:inset 0 1px 1px rgba(0,0,0,.05);pointer-events:none;opacity:.9;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}.bootstrap-select .dropdown-menu .notify.fadeOut{-webkit-animation:.3s linear 750ms forwards bs-notify-fadeOut;-o-animation:.3s linear 750ms forwards 
bs-notify-fadeOut;animation:.3s linear 750ms forwards bs-notify-fadeOut}.bootstrap-select .no-results{padding:3px;background:#f5f5f5;margin:0 5px;white-space:nowrap}.bootstrap-select.fit-width .dropdown-toggle .filter-option{position:static;display:inline;padding:0}.bootstrap-select.fit-width .dropdown-toggle .filter-option-inner,.bootstrap-select.fit-width .dropdown-toggle .filter-option-inner-inner{display:inline}.bootstrap-select.fit-width .dropdown-toggle .bs-caret:before{content:'\00a0'}.bootstrap-select.fit-width .dropdown-toggle .caret{position:static;top:auto;margin-top:-1px}.bootstrap-select.show-tick .dropdown-menu .selected span.check-mark{position:absolute;display:inline-block;right:15px;top:5px}.bootstrap-select.show-tick .dropdown-menu li a span.text{margin-right:34px}.bootstrap-select .bs-ok-default:after{content:'';display:block;width:.5em;height:1em;border-style:solid;border-width:0 .26em .26em 0;-webkit-transform:rotate(45deg);-ms-transform:rotate(45deg);-o-transform:rotate(45deg);transform:rotate(45deg)}.bootstrap-select.show-menu-arrow.open>.dropdown-toggle,.bootstrap-select.show-menu-arrow.show>.dropdown-toggle{z-index:1061}.bootstrap-select.show-menu-arrow .dropdown-toggle .filter-option:before{content:'';border-left:7px solid transparent;border-right:7px solid transparent;border-bottom:7px solid rgba(204,204,204,.2);position:absolute;bottom:-4px;left:9px;display:none}.bootstrap-select.show-menu-arrow .dropdown-toggle .filter-option:after{content:'';border-left:6px solid transparent;border-right:6px solid transparent;border-bottom:6px solid #fff;position:absolute;bottom:-4px;left:10px;display:none}.bootstrap-select.show-menu-arrow.dropup .dropdown-toggle .filter-option:before{bottom:auto;top:-4px;border-top:7px solid rgba(204,204,204,.2);border-bottom:0}.bootstrap-select.show-menu-arrow.dropup .dropdown-toggle .filter-option:after{bottom:auto;top:-4px;border-top:6px solid #fff;border-bottom:0}.bootstrap-select.show-menu-arrow.pull-right 
.dropdown-toggle .filter-option:before{right:12px;left:auto}.bootstrap-select.show-menu-arrow.pull-right .dropdown-toggle .filter-option:after{right:13px;left:auto}.bootstrap-select.show-menu-arrow.open>.dropdown-toggle .filter-option:after,.bootstrap-select.show-menu-arrow.open>.dropdown-toggle .filter-option:before,.bootstrap-select.show-menu-arrow.show>.dropdown-toggle .filter-option:after,.bootstrap-select.show-menu-arrow.show>.dropdown-toggle .filter-option:before{display:block}.bs-actionsbox,.bs-donebutton,.bs-searchbox{padding:4px 8px}.bs-actionsbox{width:100%;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}.bs-actionsbox .btn-group button{width:50%}.bs-donebutton{float:left;width:100%;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}.bs-donebutton .btn-group button{width:100%}.bs-searchbox+.bs-actionsbox{padding:0 8px 4px}.bs-searchbox .form-control{margin-bottom:0;width:100%;float:none} \ No newline at end of file diff --git a/cps/static/css/libs/bootstrap-table.min.css b/cps/static/css/libs/bootstrap-table.min.css index 770b6728..72a8f74f 100644 --- a/cps/static/css/libs/bootstrap-table.min.css +++ b/cps/static/css/libs/bootstrap-table.min.css @@ -1 +1,10 @@ -.fixed-table-container .bs-checkbox,.fixed-table-container .no-records-found{text-align:center}.fixed-table-body thead th .th-inner,.table td,.table th{box-sizing:border-box}.bootstrap-table .table{margin-bottom:0!important;border-bottom:1px solid #ddd;border-collapse:collapse!important;border-radius:1px}.bootstrap-table .table:not(.table-condensed),.bootstrap-table .table:not(.table-condensed)>tbody>tr>td,.bootstrap-table .table:not(.table-condensed)>tbody>tr>th,.bootstrap-table .table:not(.table-condensed)>tfoot>tr>td,.bootstrap-table .table:not(.table-condensed)>tfoot>tr>th,.bootstrap-table .table:not(.table-condensed)>thead>tr>td{padding:8px}.bootstrap-table .table.table-no-bordered>tbody>tr>td,.bootstrap-table 
.table.table-no-bordered>thead>tr>th{border-right:2px solid transparent}.bootstrap-table .table.table-no-bordered>tbody>tr>td:last-child{border-right:none}.fixed-table-container{position:relative;clear:both;border:1px solid #ddd;border-radius:4px;-webkit-border-radius:4px;-moz-border-radius:4px}.fixed-table-container.table-no-bordered{border:1px solid transparent}.fixed-table-footer,.fixed-table-header{overflow:hidden}.fixed-table-footer{border-top:1px solid #ddd}.fixed-table-body{overflow-x:auto;overflow-y:auto;height:100%}.fixed-table-container table{width:100%}.fixed-table-container thead th{height:0;padding:0;margin:0;border-left:1px solid #ddd}.fixed-table-container thead th:focus{outline:transparent solid 0}.fixed-table-container thead th:first-child:not([data-not-first-th]){border-left:none;border-top-left-radius:4px;-webkit-border-top-left-radius:4px;-moz-border-radius-topleft:4px}.fixed-table-container tbody td .th-inner,.fixed-table-container thead th .th-inner{padding:8px;line-height:24px;vertical-align:top;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.fixed-table-container thead th .sortable{cursor:pointer;background-position:right;background-repeat:no-repeat;padding-right:30px}.fixed-table-container thead th .both{background-image:url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAQAAADYWf5HAAAAkElEQVQoz7X QMQ5AQBCF4dWQSJxC5wwax1Cq1e7BAdxD5SL+Tq/QCM1oNiJidwox0355mXnG/DrEtIQ6azioNZQxI0ykPhTQIwhCR+BmBYtlK7kLJYwWCcJA9M4qdrZrd8pPjZWPtOqdRQy320YSV17OatFC4euts6z39GYMKRPCTKY9UnPQ6P+GtMRfGtPnBCiqhAeJPmkqAAAAAElFTkSuQmCC')}.fixed-table-container thead th .asc{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAYAAAByUDbMAAAAZ0lEQVQ4y2NgGLKgquEuFxBPAGI2ahhWCsS/gDibUoO0gPgxEP8H4ttArEyuQYxAPBdqEAxPBImTY5gjEL9DM+wTENuQahAvEO9DMwiGdwAxOymGJQLxTyD+jgWDxCMZRsEoGAVoAADeemwtPcZI2wAAAABJRU5ErkJggg==)}.fixed-table-container thead th 
.desc{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAYAAAByUDbMAAAAZUlEQVQ4y2NgGAWjYBSggaqGu5FA/BOIv2PBIPFEUgxjB+IdQPwfC94HxLykus4GiD+hGfQOiB3J8SojEE9EM2wuSJzcsFMG4ttQgx4DsRalkZENxL+AuJQaMcsGxBOAmGvopk8AVz1sLZgg0bsAAAAASUVORK5CYII=)}.fixed-table-container th.detail{width:30px}.fixed-table-container tbody td{border-left:1px solid #ddd}.fixed-table-container tbody tr:first-child td{border-top:none}.fixed-table-container tbody td:first-child{border-left:none}.fixed-table-container tbody .selected td{background-color:#f5f5f5}.fixed-table-container input[type=radio],.fixed-table-container input[type=checkbox]{margin:0 auto!important}.fixed-table-pagination .pagination-detail,.fixed-table-pagination div.pagination{margin-top:10px;margin-bottom:10px}.fixed-table-pagination div.pagination .pagination{margin:0}.fixed-table-pagination .pagination a{padding:6px 12px;line-height:1.428571429}.fixed-table-pagination .pagination-info{line-height:34px;margin-right:5px}.fixed-table-pagination .btn-group{position:relative;display:inline-block;vertical-align:middle}.fixed-table-pagination .dropup .dropdown-menu{margin-bottom:0}.fixed-table-pagination .page-list{display:inline-block}.fixed-table-toolbar .columns-left{margin-right:5px}.fixed-table-toolbar .columns-right{margin-left:5px}.fixed-table-toolbar .columns label{display:block;padding:3px 20px;clear:both;font-weight:400;line-height:1.428571429}.fixed-table-toolbar .bs-bars,.fixed-table-toolbar .columns,.fixed-table-toolbar .search{position:relative;margin-top:10px;margin-bottom:10px;line-height:34px}.fixed-table-pagination li.disabled a{pointer-events:none;cursor:default}.fixed-table-loading{display:none;position:absolute;top:42px;right:0;bottom:0;left:0;z-index:99;background-color:#fff;text-align:center}.fixed-table-body .card-view .title{font-weight:700;display:inline-block;min-width:30%;text-align:left!important}.table td,.table th{vertical-align:middle}.fixed-table-toolbar 
.dropdown-menu{text-align:left;max-height:300px;overflow:auto}.fixed-table-toolbar .btn-group>.btn-group{display:inline-block;margin-left:-1px!important}.fixed-table-toolbar .btn-group>.btn-group>.btn{border-radius:0}.fixed-table-toolbar .btn-group>.btn-group:first-child>.btn{border-top-left-radius:4px;border-bottom-left-radius:4px}.fixed-table-toolbar .btn-group>.btn-group:last-child>.btn{border-top-right-radius:4px;border-bottom-right-radius:4px}.bootstrap-table .table>thead>tr>th{vertical-align:bottom;border-bottom:1px solid #ddd}.bootstrap-table .table thead>tr>th{padding:0;margin:0}.bootstrap-table .fixed-table-footer tbody>tr>td{padding:0!important}.bootstrap-table .fixed-table-footer .table{border-bottom:none;border-radius:0;padding:0!important}.bootstrap-table .pull-right .dropdown-menu{right:0;left:auto}p.fixed-table-scroll-inner{width:100%;height:200px}div.fixed-table-scroll-outer{top:0;left:0;visibility:hidden;width:200px;height:150px;overflow:hidden}.fixed-table-pagination:after,.fixed-table-toolbar:after{content:"";display:block;clear:both}.fullscreen{position:fixed;top:0;left:0;z-index:1050;width:100%!important;background:#FFF} \ No newline at end of file +/** + * bootstrap-table - An extended table to integration with some of the most widely used CSS frameworks. 
(Supports Bootstrap, Semantic UI, Bulma, Material Design, Foundation) + * + * @version v1.16.0 + * @homepage https://bootstrap-table.com + * @author wenzhixin (http://wenzhixin.net.cn/) + * @license MIT + */ + +.bootstrap-table .fixed-table-toolbar::after{content:"";display:block;clear:both}.bootstrap-table .fixed-table-toolbar .bs-bars,.bootstrap-table .fixed-table-toolbar .columns,.bootstrap-table .fixed-table-toolbar .search{position:relative;margin-top:10px;margin-bottom:10px}.bootstrap-table .fixed-table-toolbar .columns .btn-group>.btn-group{display:inline-block;margin-left:-1px!important}.bootstrap-table .fixed-table-toolbar .columns .btn-group>.btn-group>.btn{border-radius:0}.bootstrap-table .fixed-table-toolbar .columns .btn-group>.btn-group:first-child>.btn{border-top-left-radius:4px;border-bottom-left-radius:4px}.bootstrap-table .fixed-table-toolbar .columns .btn-group>.btn-group:last-child>.btn{border-top-right-radius:4px;border-bottom-right-radius:4px}.bootstrap-table .fixed-table-toolbar .columns .dropdown-menu{text-align:left;max-height:300px;overflow:auto;-ms-overflow-style:scrollbar;z-index:1001}.bootstrap-table .fixed-table-toolbar .columns label{display:block;padding:3px 20px;clear:both;font-weight:400;line-height:1.428571429}.bootstrap-table .fixed-table-toolbar .columns-left{margin-right:5px}.bootstrap-table .fixed-table-toolbar .columns-right{margin-left:5px}.bootstrap-table .fixed-table-toolbar .pull-right .dropdown-menu{right:0;left:auto}.bootstrap-table .fixed-table-container{position:relative;clear:both}.bootstrap-table .fixed-table-container .table{width:100%;margin-bottom:0!important}.bootstrap-table .fixed-table-container .table td,.bootstrap-table .fixed-table-container .table th{vertical-align:middle;box-sizing:border-box}.bootstrap-table .fixed-table-container .table thead th{vertical-align:bottom;padding:0;margin:0}.bootstrap-table .fixed-table-container .table thead th:focus{outline:0 solid transparent}.bootstrap-table 
.fixed-table-container .table thead th.detail{width:30px}.bootstrap-table .fixed-table-container .table thead th .th-inner{padding:.75rem;vertical-align:bottom;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.bootstrap-table .fixed-table-container .table thead th .sortable{cursor:pointer;background-position:right;background-repeat:no-repeat;padding-right:30px!important}.bootstrap-table .fixed-table-container .table thead th .both{background-image:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAQAAADYWf5HAAAAkElEQVQoz7X QMQ5AQBCF4dWQSJxC5wwax1Cq1e7BAdxD5SL+Tq/QCM1oNiJidwox0355mXnG/DrEtIQ6azioNZQxI0ykPhTQIwhCR+BmBYtlK7kLJYwWCcJA9M4qdrZrd8pPjZWPtOqdRQy320YSV17OatFC4euts6z39GYMKRPCTKY9UnPQ6P+GtMRfGtPnBCiqhAeJPmkqAAAAAElFTkSuQmCC")}.bootstrap-table .fixed-table-container .table thead th .asc{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAYAAAByUDbMAAAAZ0lEQVQ4y2NgGLKgquEuFxBPAGI2ahhWCsS/gDibUoO0gPgxEP8H4ttArEyuQYxAPBdqEAxPBImTY5gjEL9DM+wTENuQahAvEO9DMwiGdwAxOymGJQLxTyD+jgWDxCMZRsEoGAVoAADeemwtPcZI2wAAAABJRU5ErkJggg==)}.bootstrap-table .fixed-table-container .table thead th .desc{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABMAAAATCAYAAAByUDbMAAAAZUlEQVQ4y2NgGAWjYBSggaqGu5FA/BOIv2PBIPFEUgxjB+IdQPwfC94HxLykus4GiD+hGfQOiB3J8SojEE9EM2wuSJzcsFMG4ttQgx4DsRalkZENxL+AuJQaMcsGxBOAmGvopk8AVz1sLZgg0bsAAAAASUVORK5CYII=)}.bootstrap-table .fixed-table-container .table tbody tr.selected td{background-color:rgba(0,0,0,.075)}.bootstrap-table .fixed-table-container .table tbody tr.no-records-found td{text-align:center}.bootstrap-table .fixed-table-container .table tbody tr .card-view{display:flex}.bootstrap-table .fixed-table-container .table tbody tr .card-view .card-view-title{font-weight:700;display:inline-block;min-width:30%;text-align:left!important}.bootstrap-table .fixed-table-container .table .bs-checkbox{text-align:center}.bootstrap-table .fixed-table-container .table .bs-checkbox 
label{margin-bottom:0}.bootstrap-table .fixed-table-container .table .bs-checkbox label input[type=checkbox],.bootstrap-table .fixed-table-container .table .bs-checkbox label input[type=radio]{margin:0 auto!important}.bootstrap-table .fixed-table-container .table.table-sm .th-inner{padding:.3rem}.bootstrap-table .fixed-table-container.fixed-height:not(.has-footer){border-bottom:1px solid #dee2e6}.bootstrap-table .fixed-table-container.fixed-height.has-card-view{border-top:1px solid #dee2e6;border-bottom:1px solid #dee2e6}.bootstrap-table .fixed-table-container.fixed-height .fixed-table-border{border-left:1px solid #dee2e6;border-right:1px solid #dee2e6}.bootstrap-table .fixed-table-container.fixed-height .table thead th{border-bottom:1px solid #dee2e6}.bootstrap-table .fixed-table-container.fixed-height .table-dark thead th{border-bottom:1px solid #32383e}.bootstrap-table .fixed-table-container .fixed-table-header{overflow:hidden}.bootstrap-table .fixed-table-container .fixed-table-body{overflow-x:auto;overflow-y:auto;height:100%}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading{align-items:center;background:#fff;display:none;justify-content:center;position:absolute;bottom:0;width:100%;z-index:1000}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap{align-items:baseline;display:flex;justify-content:center}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .loading-text{font-size:2rem;margin-right:6px}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-wrap{align-items:center;display:flex;justify-content:center}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-dot,.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-wrap::after,.bootstrap-table .fixed-table-container .fixed-table-body 
.fixed-table-loading .loading-wrap .animation-wrap::before{content:"";animation-duration:1.5s;animation-iteration-count:infinite;animation-name:LOADING;background:#212529;border-radius:50%;display:block;height:5px;margin:0 4px;opacity:0;width:5px}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-dot{animation-delay:.3s}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading .loading-wrap .animation-wrap::after{animation-delay:.6s}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading.table-dark{background:#212529}.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading.table-dark .animation-dot,.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading.table-dark .animation-wrap::after,.bootstrap-table .fixed-table-container .fixed-table-body .fixed-table-loading.table-dark .animation-wrap::before{background:#fff}.bootstrap-table .fixed-table-container .fixed-table-footer{overflow:hidden}.bootstrap-table .fixed-table-pagination::after{content:"";display:block;clear:both}.bootstrap-table .fixed-table-pagination>.pagination,.bootstrap-table .fixed-table-pagination>.pagination-detail{margin-top:10px;margin-bottom:10px}.bootstrap-table .fixed-table-pagination>.pagination-detail .pagination-info{line-height:34px;margin-right:5px}.bootstrap-table .fixed-table-pagination>.pagination-detail .page-list{display:inline-block}.bootstrap-table .fixed-table-pagination>.pagination-detail .page-list .btn-group{position:relative;display:inline-block;vertical-align:middle}.bootstrap-table .fixed-table-pagination>.pagination-detail .page-list .btn-group .dropdown-menu{margin-bottom:0}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination{margin:0}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination a{padding:6px 12px;line-height:1.428571429}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination 
li.page-intermediate a{color:#c8c8c8}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination li.page-intermediate a::before{content:'\2B05'}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination li.page-intermediate a::after{content:'\27A1'}.bootstrap-table .fixed-table-pagination>.pagination ul.pagination li.disabled a{pointer-events:none;cursor:default}.bootstrap-table.fullscreen{position:fixed;top:0;left:0;z-index:1050;width:100%!important;background:#fff;height:calc(100vh);overflow-y:scroll}div.fixed-table-scroll-inner{width:100%;height:200px}div.fixed-table-scroll-outer{top:0;left:0;visibility:hidden;width:200px;height:150px;overflow:hidden}@keyframes LOADING{0%{opacity:0}50%{opacity:1}to{opacity:0}} \ No newline at end of file diff --git a/cps/static/css/listen.css b/cps/static/css/listen.css index a69af72e..9e1d3bb4 100644 --- a/cps/static/css/listen.css +++ b/cps/static/css/listen.css @@ -66,19 +66,12 @@ body { right: 40px; } -xmp, -pre, -plaintext { - display: block; - font-family: -moz-fixed; - white-space: pre; - margin: 1em 0; -} - pre { + display: block; + margin: 1em 0; white-space: pre-wrap; word-wrap: break-word; - font-family: -moz-fixed; + font-family: -moz-fixed, sans-serif; column-count: 2; -webkit-columns: 2; -moz-columns: 2; diff --git a/cps/static/css/main.css b/cps/static/css/main.css index 08506bc2..e97497de 100644 --- a/cps/static/css/main.css +++ b/cps/static/css/main.css @@ -25,10 +25,9 @@ body { overflow: hidden; -webkit-transition: -webkit-transform 0.4s, width 0.2s; -moz-transition: -webkit-transform 0.4s, width 0.2s; - -ms-transition: -webkit-transform 0.4s, width 0.2s; + transition: -webkit-transform 0.4s, width 0.2s; -moz-box-shadow: inset 0 0 50px rgba(0, 0, 0, 0.1); -webkit-box-shadow: inset 0 0 50px rgba(0, 0, 0, 0.1); - -ms-box-shadow: inset 0 0 50px rgba(0, 0, 0, 0.1); box-shadow: inset 0 0 50px rgba(0, 0, 0, 0.1); } @@ -36,7 +35,6 @@ body { height: 8%; min-height: 20px; padding: 10px; - /* margin: 0 
50px 0 50px; */ position: relative; color: #4f4f4f; font-weight: 100; @@ -45,7 +43,7 @@ body { text-align: center; -webkit-transition: opacity 0.5s; -moz-transition: opacity 0.5s; - -ms-transition: opacity 0.5s; + transition: opacity 0.5s; z-index: 10; } @@ -74,12 +72,21 @@ body { padding: 3px; } +#panels a { + visibility: hidden; + width: 18px; + height: 20px; + overflow: hidden; + display: inline-block; + color: #ccc; + margin-left: 6px; +} + #titlebar a:active { opacity: 1; color: rgba(0, 0, 0, 0.6); -moz-box-shadow: inset 0 0 6px rgba(155, 155, 155, 0.8); -webkit-box-shadow: inset 0 0 6px rgba(155, 155, 155, 0.8); - -ms-box-shadow: inset 0 0 6px rgba(155, 155, 155, 0.8); box-shadow: inset 0 0 6px rgba(155, 155, 155, 0.8); } @@ -116,12 +123,11 @@ body { top: 50%; margin-top: -192px; font-size: 64px; - color: #E2E2E2; + color: #e2e2e2; font-family: arial, sans-serif; font-weight: bold; cursor: pointer; -webkit-user-select: none; - -khtml-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; @@ -147,16 +153,10 @@ body { height: 100%; -webkit-transition: -webkit-transform 0.5s; -moz-transition: -moz-transform 0.5s; - -ms-transition: -moz-transform 0.5s; + transition: -moz-transform 0.5s; overflow: hidden; } -#sidebar.open { - /* left: 0; */ - /* -webkit-transform: translate(0, 0); - -moz-transform: translate(0, 0); */ -} - #main.closed { /* left: 300px; */ -webkit-transform: translate(300px, 0); @@ -183,7 +183,6 @@ body { height: 14px; -moz-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.6); -webkit-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.6); - -ms-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.6); box-shadow: 0 1px 3px rgba(0, 0, 0, 0.6); } @@ -200,19 +199,24 @@ body { #title-controls { float: right; } -#panels a { - visibility: hidden; - width: 18px; - height: 20px; - overflow: hidden; - display: inline-block; - color: #ccc; - margin-left: 6px; -} - #panels a::before { visibility: visible; } #panels a:hover { color: #aaa; } +.list_item.currentChapter > 
a, +.list_item a:hover { + color: #f1f1f1; +} + +.list_item a { + color: #aaa; + text-decoration: none; +} + +#searchResults a { + color: #aaa; + text-decoration: none; +} + #panels a:active { color: #aaa; margin: 1px 0 -1px 6px; @@ -232,7 +236,6 @@ body { input::-webkit-input-placeholder { color: #454545; } input:-moz-placeholder { color: #454545; } -input:-ms-placeholder { color: #454545; } #divider { position: absolute; @@ -243,7 +246,7 @@ input:-ms-placeholder { color: #454545; } left: 50%; margin-left: -1px; top: 10%; - opacity: .15; + opacity: 0.15; box-shadow: -2px 0 15px rgba(0, 0, 0, 1); display: none; } @@ -268,24 +271,35 @@ input:-ms-placeholder { color: #454545; } width: 25%; height: 100%; visibility: hidden; - -webkit-transition: visibility 0 ease 0.5s; - -moz-transition: visibility 0 ease 0.5s; - -ms-transition: visibility 0 ease 0.5s; + -webkit-transition: visibility 0s ease 0.5s; + -moz-transition: visibility 0s ease 0.5s; + transition: visibility 0s ease 0.5s; } #sidebar.open #tocView, #sidebar.open #bookmarksView { overflow-y: auto; visibility: visible; - -webkit-transition: visibility 0 ease 0; - -moz-transition: visibility 0 ease 0; - -ms-transition: visibility 0 ease 0; + -webkit-transition: visibility 0s ease 0s; + -moz-transition: visibility 0s ease 0s; + transition: visibility 0s ease 0s; } #sidebar.open #tocView { display: block; } +.list_item ul { + padding-left: 10px; + margin-top: 8px; + display: none; +} + +.list_item.currentChapter > ul, +.list_item.openChapter > ul { + display: block; +} + #tocView > ul, #bookmarksView > ul { margin-top: 15px; @@ -296,22 +310,41 @@ input:-ms-placeholder { color: #454545; } #tocView li, #bookmarksView li { - margin-bottom:10px; + margin-bottom: 10px; width: 225px; font-family: Georgia, "Times New Roman", Times, serif; list-style: none; text-transform: capitalize; } -#tocView li:active, -#tocView li.currentChapter -{ +.md-content > div ul li { + padding: 5px 0; +} + +#settingsPanel li { + font-size: 
1em; + color: #f1f1f1; +} + +#searchResults li { + margin-bottom: 10px; + width: 225px; + font-family: Georgia, "Times New Roman", Times, serif; list-style: none; } -.list_item a { - color: #aaa; - text-decoration: none; +#notes li { + color: #eee; + font-size: 12px; + width: 240px; + border-top: 1px #fff solid; + padding-top: 6px; + margin-bottom: 6px; +} + +#tocView li:active, +#tocView li.currentChapter { + list-style: none; } .list_item a.chapter { @@ -322,27 +355,11 @@ input:-ms-placeholder { color: #454545; } font-size: 0.8em; } -.list_item.currentChapter > a, -.list_item a:hover { - color: #f1f1f1 -} - /* #tocView li.openChapter > a, */ .list_item a:hover { color: #e2e2e2; } -.list_item ul { - padding-left:10px; - margin-top: 8px; - display: none; -} - -.list_item.currentChapter > ul, -.list_item.openChapter > ul { - display: block; -} - #tocView.hidden { display: none; } @@ -357,14 +374,14 @@ input:-ms-placeholder { color: #454545; } user-select: none; } -.toc_toggle:before { +.toc_toggle::before { content: '▸'; color: #fff; margin-right: -4px; } -.currentChapter > .toc_toggle:before, -.openChapter > .toc_toggle:before { +.currentChapter > .toc_toggle::before, +.openChapter > .toc_toggle::before { content: '▾'; } @@ -382,18 +399,6 @@ input:-ms-placeholder { color: #454545; } display: block; } -#searchResults li { - margin-bottom:10px; - width: 225px; - font-family: Georgia, "Times New Roman", Times, serif; - list-style: none; -} - -#searchResults a { - color: #aaa; - text-decoration: none; -} - #searchResults p { text-decoration: none; font-size: 12px; @@ -405,10 +410,21 @@ input:-ms-placeholder { color: #454545; } color: #000; } +.md-content > div p { + margin: 0; + padding: 10px 0; +} + #searchResults li > p { color: #aaa; } +#notes li a { + color: #fff; + display: inline-block; + margin-left: 6px; +} + #searchResults li a:hover { color: #e2e2e2; } @@ -419,22 +435,7 @@ input:-ms-placeholder { color: #454545; } } #notes { - padding: 0 0 0 34px; -} - 
-#notes li { - color: #eee; - font-size: 12px; - width: 240px; - border-top: 1px #fff solid; - padding-top: 6px; - margin-bottom: 6px; -} - -#notes li a { - color: #fff; - display: inline-block; - margin-left: 6px; + padding: 0 0 0 34px; } #notes li a:hover { @@ -454,8 +455,9 @@ input:-ms-placeholder { color: #454545; } border-radius: 5px; } -#note-text[disabled], #note-text[disabled="disabled"]{ - opacity: 0.5; +#note-text[disabled], +#note-text[disabled="disabled"] { + opacity: 0.5; } #note-anchor { @@ -467,6 +469,22 @@ input:-ms-placeholder { color: #454545; } display: none; } +.md-content h3 { + margin: 0; + padding: 6px; + text-align: center; + font-size: 22px; + font-weight: 300; + opacity: 0.8; + background: rgba(0, 0, 0, 0.1); + border-radius: 3px 3px 0 0; +} + +.md-content > div ul { + margin: 0; + padding: 0 0 30px 20px; +} + #settingsPanel h3 { color: #f1f1f1; font-family: Georgia, "Times New Roman", Times, serif; @@ -478,32 +496,24 @@ input:-ms-placeholder { color: #454545; } list-style-type: none; } -#settingsPanel li { - font-size: 1em; - color: #f1f1f1; -} +#settingsPanel .xsmall { font-size: x-small; } +#settingsPanel .small { font-size: small; } +#settingsPanel .medium { font-size: medium; } +#settingsPanel .large { font-size: large; } +#settingsPanel .xlarge { font-size: x-large; } -#settingsPanel .xsmall { font-size: x-small; } -#settingsPanel .small { font-size: small; } -#settingsPanel .medium { font-size: medium; } -#settingsPanel .large { font-size: large; } -#settingsPanel .xlarge { font-size: x-large; } - -.highlight { background-color: yellow } +.highlight { background-color: yellow; } .modal { position: fixed; top: 50%; left: 50%; - width: 50%; width: 630px; - height: auto; z-index: 2000; visibility: hidden; margin-left: -320px; margin-top: -160px; - } .overlay { @@ -518,17 +528,16 @@ input:-ms-placeholder { color: #454545; } background: rgba(255, 255, 255, 0.8); -webkit-transition: all 0.3s; -moz-transition: all 0.3s; - -ms-transition: 
all 0.3s; transition: all 0.3s; } .md-show { - visibility: visible; + visibility: visible; } .md-show ~ .overlay { - opacity: 1; - visibility: visible; + opacity: 1; + visibility: visible; } /* Content styles */ @@ -541,17 +550,6 @@ input:-ms-placeholder { color: #454545; } height: 320px; } -.md-content h3 { - margin: 0; - padding: 6px; - text-align: center; - font-size: 22px; - font-weight: 300; - opacity: 0.8; - background: rgba(0, 0, 0, 0.1); - border-radius: 3px 3px 0 0; -} - .md-content > div { padding: 15px 40px 30px; margin: 0; @@ -559,20 +557,6 @@ input:-ms-placeholder { color: #454545; } font-size: 14px; } -.md-content > div p { - margin: 0; - padding: 10px 0; -} - -.md-content > div ul { - margin: 0; - padding: 0 0 30px 20px; -} - -.md-content > div ul li { - padding: 5px 0; -} - .md-content button { display: block; margin: 0 auto; @@ -588,7 +572,6 @@ input:-ms-placeholder { color: #454545; } opacity: 0; -webkit-transition: all 0.3s; -moz-transition: all 0.3s; - -ms-transition: all 0.3s; transition: all 0.3s; } @@ -601,7 +584,6 @@ input:-ms-placeholder { color: #454545; } } .md-content > .closer { - font-size: 18px; position: absolute; right: 0; top: 0; @@ -610,7 +592,7 @@ input:-ms-placeholder { color: #454545; } } @media only screen and (max-width: 1040px) and (orientation: portrait) { - #viewer{ + #viewer { width: 80%; margin-left: 10%; } @@ -622,7 +604,7 @@ input:-ms-placeholder { color: #454545; } } @media only screen and (max-width: 900px) { - #viewer{ + #viewer { width: 60%; margin-left: 20%; } @@ -637,7 +619,7 @@ input:-ms-placeholder { color: #454545; } } @media only screen and (max-width: 550px) { - #viewer{ + #viewer { width: 80%; margin-left: 10%; } @@ -661,9 +643,9 @@ input:-ms-placeholder { color: #454545; } -webkit-transform: translate(0, 0); -moz-transform: translate(0, 0); -ms-transform: translate(0, 0); - -webkit-transition: -webkit-transform .3s; - -moz-transition: -moz-transform .3s; - -ms-transition: -moz-transform .3s; + 
-webkit-transition: -webkit-transform 0.3s; + -moz-transition: -moz-transform 0.3s; + transition: -moz-transform 0.3s; } #main.closed { @@ -672,11 +654,6 @@ input:-ms-placeholder { color: #454545; } -ms-transform: translate(260px, 0); } - #titlebar { - /* font-size: 16px; */ - /* margin: 0 50px 0 50px; */ - } - #metainfo { font-size: 10px; } @@ -689,130 +666,129 @@ input:-ms-placeholder { color: #454545; } font-size: 12px; } - #tocView > ul{ + #tocView > ul { padding-left: 10px; } } - /* For iPad portrait layouts only */ @media only screen and (min-device-width: 481px) and (max-device-width: 1024px) and (orientation: portrait) { - #viewer iframe { - width: 460px; - height: 740px; - } + #viewer iframe { + width: 460px; + height: 740px; + } } - /*For iPad landscape layouts only *//* -@media only screen and (min-device-width: 481px) and (max-device-width: 1024px) and (orientation: landscape) { - #viewer iframe { - width: 460px; - height: 415px; - } -}*/ @media only screen -and (min-device-width : 768px) -and (max-device-width : 1024px) -and (orientation : landscape) -/*and (-webkit-min-device-pixel-ratio: 2)*/ { - #viewer{ + and (min-device-width: 768px) + and (max-device-width: 1024px) + and (orientation: landscape) + /* and (-webkit-min-device-pixel-ratio: 2)*/ { + #viewer { width: 80%; margin-left: 10%; } + #divider, #divider.show { display: none; } } - /*For iPad landscape layouts only */ +/* For iPad landscape layouts only */ @media only screen and (min-device-width: 481px) and (max-device-width: 1024px) and (orientation: landscape) { - #viewer iframe { - width: 960px; - height: 515px; - } + #viewer iframe { + width: 960px; + height: 515px; + } } /* For iPhone 6\6s portrait layouts only */ -@media only screen and (min-device-width : 375px) and (max-device-width : 667px) and (orientation: portrait) { - #viewer { - width: 300px; - height: 480px; - } - #viewer iframe { - width: 300px; - height: 480px; - } +@media only screen and (min-device-width: 375px) and 
(max-device-width: 667px) and (orientation: portrait) { + #viewer { + width: 300px; + height: 480px; + } + + #viewer iframe { + width: 300px; + height: 480px; + } } /* For iPhone 6\6s landscape layouts only */ -@media only screen and (min-device-width : 375px) and (max-device-width : 667px) and (orientation: landscape) { - #viewer { - width: 450px; - height: 300px; - } - #viewer iframe { - width: 450px; - height: 300px; - } +@media only screen and (min-device-width: 375px) and (max-device-width: 667px) and (orientation: landscape) { + #viewer { + width: 450px; + height: 300px; + } + + #viewer iframe { + width: 450px; + height: 300px; + } } /* For iPhone portrait layouts only */ @media only screen and (max-device-width: 374px) and (orientation: portrait) { - #viewer { - width: 256px; - height: 432px; - } - #viewer iframe { - width: 256px; - height: 432px; - } + #viewer { + width: 256px; + height: 432px; + } + + #viewer iframe { + width: 256px; + height: 432px; + } } /* For iPhone landscape layouts only */ @media only screen and (max-device-width: 374px) and (orientation: landscape) { - #viewer iframe { - width: 256px; - height: 124px; - } + #viewer iframe { + width: 256px; + height: 124px; + } } -[class^="icon-"]:before, [class*=" icon-"]:before { - font-family: "fontello"; - font-style: normal; - font-weight: normal; - speak: none; - display: inline-block; - text-decoration: inherit; - width: 1em; - margin-right: 0.2em; - text-align: center; - /* For safety - reset parent styles, that can break glyph codes*/ - font-variant: normal; - text-transform: none; - /* you can be more comfortable with increased icons size */ - font-size: 112%; +[class^="icon-"]::before, +[class*=" icon-"]::before { + font-family: "fontello", serif; + font-style: normal; + font-weight: normal; + speak: none; + display: inline-block; + text-decoration: inherit; + width: 1em; + margin-right: 0.2em; + text-align: center; + + /* For safety - reset parent styles, that can break glyph codes */ + 
font-variant: normal; + text-transform: none; + + /* you can be more comfortable with increased icons size */ + font-size: 112%; } -.icon-search:before { content: '\e807'; } /* '' */ -.icon-resize-full-1:before { content: '\e804'; } /* '' */ -.icon-cancel-circled2:before { content: '\e80f'; } /* '' */ -.icon-link:before { content: '\e80d'; } /* '' */ -.icon-bookmark:before { content: '\e805'; } /* '' */ -.icon-bookmark-empty:before { content: '\e806'; } /* '' */ -.icon-download-cloud:before { content: '\e811'; } /* '' */ -.icon-edit:before { content: '\e814'; } /* '' */ -.icon-menu:before { content: '\e802'; } /* '' */ -.icon-cog:before { content: '\e813'; } /* '' */ -.icon-resize-full:before { content: '\e812'; } /* '' */ -.icon-cancel-circled:before { content: '\e80e'; } /* '' */ -.icon-up-dir:before { content: '\e80c'; } /* '' */ -.icon-right-dir:before { content: '\e80b'; } /* '' */ -.icon-angle-right:before { content: '\e809'; } /* '' */ -.icon-angle-down:before { content: '\e80a'; } /* '' */ -.icon-right:before { content: '\e815'; } /* '' */ -.icon-list-1:before { content: '\e803'; } /* '' */ -.icon-list-numbered:before { content: '\e801'; } /* '' */ -.icon-columns:before { content: '\e810'; } /* '' */ -.icon-list:before { content: '\e800'; } /* '' */ -.icon-resize-small:before { content: '\e808'; } /* '' */ +.icon-search::before { content: '\e807'; } /* '' */ +.icon-resize-full-1::before { content: '\e804'; } /* '' */ +.icon-cancel-circled2::before { content: '\e80f'; } /* '' */ +.icon-link::before { content: '\e80d'; } /* '' */ +.icon-bookmark::before { content: '\e805'; } /* '' */ +.icon-bookmark-empty::before { content: '\e806'; } /* '' */ +.icon-download-cloud::before { content: '\e811'; } /* '' */ +.icon-edit::before { content: '\e814'; } /* '' */ +.icon-menu::before { content: '\e802'; } /* '' */ +.icon-cog::before { content: '\e813'; } /* '' */ +.icon-resize-full::before { content: '\e812'; } /* '' */ 
+.icon-cancel-circled::before { content: '\e80e'; } /* '' */ +.icon-up-dir::before { content: '\e80c'; } /* '' */ +.icon-right-dir::before { content: '\e80b'; } /* '' */ +.icon-angle-right::before { content: '\e809'; } /* '' */ +.icon-angle-down::before { content: '\e80a'; } /* '' */ +.icon-right::before { content: '\e815'; } /* '' */ +.icon-list-1::before { content: '\e803'; } /* '' */ +.icon-list-numbered::before { content: '\e801'; } /* '' */ +.icon-columns::before { content: '\e810'; } /* '' */ +.icon-list::before { content: '\e800'; } /* '' */ +.icon-resize-small::before { content: '\e808'; } /* '' */ diff --git a/cps/static/css/style.css b/cps/static/css/style.css index 4d8b4805..02658771 100644 --- a/cps/static/css/style.css +++ b/cps/static/css/style.css @@ -1,7 +1,7 @@ .tooltip.bottom .tooltip-inner { font-size: 13px; - font-family: Open Sans Semibold,Helvetica Neue,Helvetica,Arial,sans-serif; + font-family: Open Sans Semibold, Helvetica Neue, Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; padding: 3px 10px; @@ -28,6 +28,11 @@ html.http-error { height: 100%; } +body { + background: #f2f2f2; + margin-bottom: 40px; +} + .http-error body { margin: 0; height: 100%; @@ -41,17 +46,39 @@ html.http-error { text-align: center; } -body { - background: #f2f2f2; - margin-bottom: 40px; -} - body h2 { font-weight: normal; - color:#444; + color: #444; } -a { color: #45b29d; } +a, +.danger, +.book-remove, +.editable-empty, +.editable-empty:hover { color: #45b29d; } +.user-remove:hover { color: #23527c; } +.btn-default a { color: #444; } +.panel-title > a { text-decoration: none; } + +.navigation li a { + color: #444; + text-decoration: none; + display: block; + padding: 10px; +} + +.btn-default a:hover { + color: #45b29d; + text-decoration: None; +} + +.btn-default:hover { + color: #45b29d; +} + +.editable-click, +a.editable-click, +a.editable-click:hover { border-bottom: None; } .navigation .nav-head { 
text-transform: uppercase; @@ -63,11 +90,18 @@ a { color: #45b29d; } border-top: 1px solid #ccc; padding-top: 20px; } -.navigation li a { - color: #444; + +.book-meta .tags a { display: inline; } +table .bg-primary a { color: #fff; } +table .bg-dark-danger a { color: #fff; } +.book-meta .identifiers a { display: inline; } + +.navigation .create-shelf a { + color: #fff; + background: #45b29d; + padding: 10px 20px; + border-radius: 5px; text-decoration: none; - display: block; - padding: 10px; } .navigation li a:hover { @@ -83,42 +117,68 @@ a { color: #45b29d; } text-align: center; } -.navigation .create-shelf a { - color: #fff; - background: #45b29d; - padding: 10px 20px; - border-radius: 5px; - text-decoration: none; +.row.display-flex { + display: flex; + flex-wrap: wrap; } .container-fluid img { display: block; max-width: 100%; height: auto; + max-height: 100%; } -.container-fluid .discover{ margin-bottom: 50px; } +.container-fluid .discover { margin-bottom: 50px; } .container-fluid .new-books { border-top: 1px solid #ccc; } .container-fluid .new-books h2 { margin: 50px 0 0 0; } -.container-fluid .book { margin-top: 20px; } + +.container-fluid .book { + margin-top: 20px; + display: flex; + flex-direction: column; +} +.cover { margin-bottom: 10px; } .container-fluid .book .cover { height: 225px; position: relative; } -.container-fluid .book .cover img { +.author-link img { + display: block; + height: 100%; +} + +.container-fluid .book .cover span.img { + bottom: 0; + height: 100%; + position: absolute; +} +.author-bio img { margin: 0 1em 1em 0; } + +.container-fluid .single .cover img { border: 1px solid #fff; box-sizing: border-box; + -webkit-box-shadow: 0 5px 8px -6px #777; + -moz-box-shadow: 0 5px 8px -6px #777; + box-shadow: 0 5px 8px -6px #777; +} + +.container-fluid .book .cover span img { + position: relative; + top: 0; + left: 0; height: 100%; - bottom: 0; - position: absolute; + border: 1px solid #fff; + box-sizing: border-box; -webkit-box-shadow: 0 5px 
8px -6px #777; -moz-box-shadow: 0 5px 8px -6px #777; box-shadow: 0 5px 8px -6px #777; } .container-fluid .book .meta { margin-top: 10px; } +.media-body p { text-align: justify; } .container-fluid .book .meta p { margin: 0; } .container-fluid .book .meta .title { @@ -127,6 +187,12 @@ a { color: #45b29d; } color: #444; } +.container-fluid .book .meta .series { + font-weight: 400; + font-size: 12px; + color: #444; +} + .container-fluid .book .meta .author { font-size: 12px; color: #999; @@ -135,9 +201,10 @@ a { color: #45b29d; } .container-fluid .book .meta .rating { margin-top: 5px; } .rating .glyphicon-star-empty { color: #444; } .rating .glyphicon-star.good { color: #444; } -.rating-clear .glyphicon-remove { color: #333 } +.rating-clear .glyphicon-remove { color: #333; } -.container-fluid .author .author-hidden, .container-fluid .author .author-hidden-divider { display: none; } +.container-fluid .author .author-hidden, +.container-fluid .author .author-hidden-divider { display: none; } .navbar-brand { font-family: 'Grand Hotel', cursive; @@ -151,7 +218,7 @@ a { color: #45b29d; } border-top: 1px solid #ccc; } -.more-stuff>li { margin-bottom: 10px; } +.more-stuff > li { margin-bottom: 10px; } .navbar-collapse.in .navbar-nav { margin: 0; } span.glyphicon.glyphicon-tags { @@ -161,34 +228,38 @@ span.glyphicon.glyphicon-tags { } .book-meta { padding-bottom: 20px; } -.book-meta .tags a { display: inline; } -.book-meta .identifiers a { display: inline; } -.container-fluid .single .cover img { - border: 1px solid #fff; - box-sizing: border-box; - -webkit-box-shadow: 0 5px 8px -6px #777; - -moz-box-shadow: 0 5px 8px -6px #777; - box-shadow: 0 5px 8px -6px #777; +.navbar-default .navbar-toggle .icon-bar { background-color: #000; } +.navbar-default .navbar-toggle { border-color: #000; } + +.cover .badge { + position: absolute; + top: 2px; + left: 2px; + color: #000; + border-radius: 10px; + background-color: #fff; } -.navbar-default .navbar-toggle .icon-bar {background-color: 
#000; } -.navbar-default .navbar-toggle {border-color: #000; } -.cover { margin-bottom: 10px; } -.cover .badge{ - position: absolute; - top: 2px; - left: 2px; - background-color: #777; +.cover .read { + left: auto; + right: 2px; + width: 17px; + height: 17px; + display: inline-block; + padding: 2px; } -.cover-height { max-height: 100px;} +.cover-height { max-height: 100px; } .col-sm-2 a .cover-small { margin: 5px; max-height: 200px; } -.btn-file {position: relative; overflow: hidden;} +.btn-file { + position: relative; + overflow: hidden; +} .btn-file input[type=file] { position: absolute; @@ -206,24 +277,60 @@ span.glyphicon.glyphicon-tags { display: block; } -.btn-toolbar .btn,.discover .btn { margin-bottom: 5px; } -.button-link {color: #fff; } -.btn-primary:hover, .btn-primary:focus, .btn-primary:active, .btn-primary.active, .open .dropdown-toggle.btn-primary{ background-color: #1C5484; } -.btn-primary.disabled, .btn-primary[disabled], fieldset[disabled] .btn-primary, .btn-primary.disabled:hover, .btn-primary[disabled]:hover, fieldset[disabled] .btn-primary:hover, .btn-primary.disabled:focus, .btn-primary[disabled]:focus, fieldset[disabled] .btn-primary:focus, .btn-primary.disabled:active, .btn-primary[disabled]:active, fieldset[disabled] .btn-primary:active, .btn-primary.disabled.active, .btn-primary[disabled].active, fieldset[disabled] .btn-primary.active { background-color: #89B9E2; } -.btn-toolbar>.btn+.btn, .btn-toolbar>.btn-group+.btn, .btn-toolbar>.btn+.btn-group, .btn-toolbar>.btn-group+.btn-group { margin-left: 0px; } -.panel-body {background-color: #f5f5f5; } -.spinner {margin: 0 41%; } -.spinner2 {margin: 0 41%; } -.intend-form { margin-left:20px; } -table .bg-dark-danger {background-color: #d9534f; color: #fff; } -table .bg-dark-danger a {color: #fff; } -table .bg-dark-danger:hover {background-color: #c9302c; } -table .bg-primary:hover {background-color: #1C5484; } -table .bg-primary a {color: #fff; } -.block-label {display: block;} -.fake-input 
{position: absolute; pointer-events: none; top: 0; } +.btn-toolbar .btn, +.discover .btn { margin-bottom: 5px; } +.button-link { color: #fff; } -input.pill { position: absolute; opacity: 0; } +.btn-primary:hover, +.btn-primary:focus, +.btn-primary:active, +.btn-primary.active, +.open .dropdown-toggle.btn-primary { background-color: #1c5484; } + +.btn-primary.disabled, +.btn-primary[disabled], +fieldset[disabled] .btn-primary, +.btn-primary.disabled:hover, +.btn-primary[disabled]:hover, +fieldset[disabled] .btn-primary:hover, +.btn-primary.disabled:focus, +.btn-primary[disabled]:focus, +fieldset[disabled] .btn-primary:focus, +.btn-primary.disabled:active, +.btn-primary[disabled]:active, +fieldset[disabled] .btn-primary:active, +.btn-primary.disabled.active, +.btn-primary[disabled].active, +fieldset[disabled] .btn-primary.active { background-color: #89b9e2; } + +.btn-toolbar > .btn + .btn, +.btn-toolbar > .btn-group + .btn, +.btn-toolbar > .btn + .btn-group, +.btn-toolbar > .btn-group + .btn-group { margin-left: 0; } + +.panel-body { background-color: #f5f5f5; } +.spinner { margin: 0 41%; } +.spinner2 { margin: 0 41%; } +.intend-form { margin-left: 20px; } + +table .bg-dark-danger { + background-color: #d9534f; + color: #fff; +} +table .bg-dark-danger:hover { background-color: #c9302c; } +table .bg-primary:hover { background-color: #1c5484; } +.block-label { display: block; } + +.fake-input { + position: absolute; + pointer-events: none; + top: 0; +} + +input.pill { + position: absolute; + opacity: 0; +} input.pill + label { border: 2px solid #45b29d; @@ -245,12 +352,18 @@ input.pill:checked + label { input.pill:not(:checked) + label .glyphicon { display: none; } -.author-bio img { margin: 0 1em 1em 0; } -.author-link { display: inline-block; margin-top: 10px; width: 100px; } -.author-link img { display: block; height: 100%; } -#remove-from-shelves .btn, #shelf-action-errors { margin-left: 5px; } +.author-link { + display: inline-block; + margin-top: 10px; + width: 
100px; +} -.tags_click, .serie_click, .language_click { margin-right: 5px; } +#remove-from-shelves .btn, +#shelf-action-errors { margin-left: 5px; } + +.tags_click, +.serie_click, +.language_click { margin-right: 5px; } #meta-info { height: 600px; @@ -258,7 +371,6 @@ input.pill:not(:checked) + label .glyphicon { display: none; } } .media-list { padding-right: 15px; } -.media-body p { text-align: justify; } #meta-info img { max-height: 150px; @@ -271,22 +383,23 @@ input.pill:not(:checked) + label .glyphicon { display: none; } #btn-upload-format { display: none; } .upload-cover-input-text { display: initial; } #btn-upload-cover { display: none; } -.panel-title > a { text-decoration: none; } + .editable-buttons { - display:inline-block; + display: inline-block; margin-left: 7px; } -.editable-input { display:inline-block; } +.editable-input { display: inline-block; } .editable-cancel { - margin-bottom: 0px !important; + margin-bottom: 0 !important; margin-left: 7px !important; } -.editable-submit { margin-bottom: 0px !important; } +.editable-submit { margin-bottom: 0 !important; } .filterheader { margin-bottom: 20px; } .errorlink { margin-top: 20px; } +.emailconfig { margin-top: 10px; } .modal-body .comments { max-height: 300px; @@ -294,7 +407,7 @@ input.pill:not(:checked) + label .glyphicon { display: none; } } div.log { - font-family: Courier New; + font-family: "Courier New", monospace; font-size: 12px; box-sizing: border-box; height: 700px; @@ -303,4 +416,3 @@ div.log { white-space: nowrap; padding: 0.5em; } - diff --git a/cps/static/generic_cover.jpg b/cps/static/generic_cover.jpg index f5c090c1..67f98248 100644 Binary files a/cps/static/generic_cover.jpg and b/cps/static/generic_cover.jpg differ diff --git a/cps/static/js/archive/unrar5.js b/cps/static/js/archive/unrar5.js new file mode 100644 index 00000000..452989c0 --- /dev/null +++ b/cps/static/js/archive/unrar5.js @@ -0,0 +1,1371 @@ +/** + * unrar5.js + * + * Licensed under the MIT License + * + * Copyright(c) 2011 
Google Inc. + * Copyright(c) 2011 antimatter15 + * + * Reference Documentation: + * + * http://kthoom.googlecode.com/hg/docs/unrar.html + */ +/* global bitjs, importScripts, RarVM, Uint8Array, UnpackFilter */ +/* global VM_FIXEDGLOBALSIZE, VM_GLOBALMEMSIZE, MAXWINMASK, VM_GLOBALMEMADDR, MAXWINSIZE */ + +// This file expects to be invoked as a Worker (see onmessage below). +/*importScripts("../io/bitstream.js"); +importScripts("../io/bytebuffer.js"); +importScripts("archive.js"); +importScripts("rarvm.js");*/ + +// Progress variables. +var currentFilename = ""; +var currentFileNumber = 0; +var currentBytesUnarchivedInFile = 0; +var currentBytesUnarchived = 0; +var totalUncompressedBytesInArchive = 0; +var totalFilesInArchive = 0; + +// Helper functions. +var info = function(str) { + console.log(str); + //postMessage(new bitjs.archive.UnarchiveInfoEvent(str)); +}; +var err = function(str) { + console.log(str); + //postMessage(new bitjs.archive.UnarchiveErrorEvent(str)); +}; +var postProgress = function() { + /*postMessage(new bitjs.archive.UnarchiveProgressEvent( + currentFilename, + currentFileNumber, + currentBytesUnarchivedInFile, + currentBytesUnarchived, + totalUncompressedBytesInArchive, + totalFilesInArchive));*/ +}; + +// shows a byte value as its hex representation +var nibble = "0123456789ABCDEF"; +var byteValueToHexString = function(num) { + return nibble[num >> 4] + nibble[num & 0xF]; +}; +var twoByteValueToHexString = function(num) { + return nibble[(num >> 12) & 0xF] + nibble[(num >> 8) & 0xF] + nibble[(num >> 4) & 0xF] + nibble[num & 0xF]; +}; + + +// Volume Types +var MAIN_HEAD = 0x01, + ENCRYPT_HEAD = 0x04, + FILE_HEAD = 0x02, + SERVICE_HEAD = 0x03, + // COMM_HEAD = 0x75, + // AV_HEAD = 0x76, + // SUB_HEAD = 0x77, + // PROTECT_HEAD = 0x78, + // SIGN_HEAD = 0x79, + // NEWSUB_HEAD = 0x7a, + ENDARC_HEAD = 0x05; + +// ============================================================================================== // + +var RarMainVolumeHeader = 
function(bstream) { + var headPos = bstream.bytePtr; + // byte 1,2 + info("Rar Volume Header @" + bstream.bytePtr); + + this.crc = bstream.readBits(16); + info(" crc=" + this.crc); + + // byte 3 + this.headType = bstream.readBits(8); + info(" headType=" + this.headType); + + // Get flags + // bytes 4,5 + this.flags = {}; + this.flags.value = bstream.readBits(16); + + // byte 6 + this.headSize = bstream.readBits(8); + // byte 7 + if (bstream.readBits(8) === 1) { + info(" RarVersion=5"); + } + // byte 8 + bstream.readBits(8); +}; + +var vint = function(bstream) { + var size = 0; + var result = 0; + var loop = 0; + do { + size = bstream.readBits(8); + result |= (size & 0x7F) << (loop * 7); + loop++; + } while (size & 0x80); + return result; +}; + +/** + * @param {bitjs.io.BitStream} bstream + * @constructor + */ +var RarVolumeHeader = function(bstream) { + var headPos = bstream.bytePtr; + // byte 1,2 + info("Rar Volume Header @" + bstream.bytePtr); + + this.crc = bstream.readBits(32); + info(" crc=" + this.crc); + + // byte 3 + x Header size + this.headSize = vint(bstream); + info(" Header Size=" + this.headSize); + + // byte 4 + this.headType = bstream.readBits(8); + info(" headType=" + this.headType); + + // Get Header flags + this.headFlags = {}; + this.headFlags.value = bstream.peekBits(8); + + info(" Header flags=" + byteValueToHexString(this.headFlags.value)); + this.headFlags.EXTRA_AREA = !!bstream.readBits(1); + this.headFlags.DATA_AREA = !!bstream.readBits(1); + this.headFlags.UNKNOWN = !!bstream.readBits(1); + this.headFlags.CONTINUE_FROM = !!bstream.readBits(1); + this.headFlags.CONTINUE_TO = !!bstream.readBits(1); + this.headFlags.DEPENDS = !!bstream.readBits(1); + this.headFlags.CHILDBLOCK = !!bstream.readBits(1); + bstream.readBits(1); // unused + + // Get extra AreaSize + if (this.headFlags.EXTRA_AREA) { + this.extraSize = vint(bstream); + } else { + this.extraSize = 0; + } + if (this.headFlags.DATA_AREA && (this.headType === FILE_HEAD || this.headType 
=== SERVICE_HEAD)) { + this.packSize = vint(bstream); + // this.packSize = bstream.readBits(32); + } + this.flags = {}; + this.flags.value = bstream.peekBits(8); + + switch (this.headType) { + case MAIN_HEAD: + // this.flags = {}; + // this.flags.value = bstream.peekBits(16); + this.flags.MHD_VOLUME = !!bstream.readBits(1); + this.flags.MHD_VOLUME_NO = !!bstream.readBits(1); + this.flags.MHD_SOLID = !!bstream.readBits(1); + this.flags.MHD_RECOVERY = !!bstream.readBits(1); + this.flags.MHD_LOCKED = !!bstream.readBits(1); + bstream.readBits(3); // unused + if (this.flags.MHD_VOLUME_NO) { + this.volumeNumber = vint(bstream); + } + bstream.readBytes(this.extraSize); + // break; + return; // Main Header finally parsed + // ------------ + case FILE_HEAD: + case SERVICE_HEAD: + this.flags.DIRECTORY = !!bstream.readBits(1); + this.flags.TIME = !!bstream.readBits(1); + this.flags.CRC = !!bstream.readBits(1); + this.flags.UNPACK_UNKNOWN = !!bstream.readBits(1); + bstream.readBits(4); + + if (this.flags.UNPACK_UNKNOWN) { + vint(bstream); + } else { + this.unpackedSize = vint(bstream); + } + this.fileAttr = vint(bstream); + if (this.flags.TIME) { + this.fileTime = bstream.readBits(32); + } + if (this.flags.CRC) { + this.fileCRC = bstream.readBits(32); + } + // var compInfo = vint(bstream); + this.unpVer = bstream.readBits(6); + this.solid = bstream.readBits(1); + bstream.readBits(1); + this.method = bstream.readBits(3); + this.dictSize = bstream.readBits(4); + bstream.readBits(1); + this.hostOS = vint(bstream); + this.nameSize = vint(bstream); + + this.filename = bstream.readBytes(this.nameSize); + var _s = ""; + for (var _i = 0; _i < this.filename.length; _i++) { + _s += String.fromCharCode(this.filename[_i]); + } + + this.filename = _s; + bstream.readBytes(this.extraSize); + break; + + default: + info("Found a header of type 0x" + byteValueToHexString(this.headType)); + // skip the rest of the header bytes (for now) + bstream.readBytes(this.headSize - 7); + break; + } 
+}; + +//var BLOCK_LZ = 0; + +var rLDecode = [0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 20, 24, 28, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224], + rLBits = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5], + rDBitLengthCounts = [4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 14, 0, 12], + rSDDecode = [0, 4, 8, 16, 32, 64, 128, 192], + rSDBits = [2, 2, 3, 4, 5, 6, 6, 6]; + +var rDDecode = [0, 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, + 48, 64, 96, 128, 192, 256, 384, 512, 768, 1024, 1536, 2048, 3072, + 4096, 6144, 8192, 12288, 16384, 24576, 32768, 49152, 65536, 98304, + 131072, 196608, 262144, 327680, 393216, 458752, 524288, 589824, + 655360, 720896, 786432, 851968, 917504, 983040 +]; + +var rDBits = [0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, + 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, + 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16 +]; + +var rLowDistRepCount = 16; + +var rNC = 299, + rDC = 60, + rLDC = 17, + rRC = 28, + rBC = 20, + rHuffTableSize = (rNC + rDC + rRC + rLDC); + +//var UnpBlockType = BLOCK_LZ; +var UnpOldTable = new Array(rHuffTableSize); + +var BD = { //bitdecode + DecodeLen: new Array(16), + DecodePos: new Array(16), + DecodeNum: new Array(rBC) +}; +var LD = { //litdecode + DecodeLen: new Array(16), + DecodePos: new Array(16), + DecodeNum: new Array(rNC) +}; +var DD = { //distdecode + DecodeLen: new Array(16), + DecodePos: new Array(16), + DecodeNum: new Array(rDC) +}; +var LDD = { //low dist decode + DecodeLen: new Array(16), + DecodePos: new Array(16), + DecodeNum: new Array(rLDC) +}; +var RD = { //rep decode + DecodeLen: new Array(16), + DecodePos: new Array(16), + DecodeNum: new Array(rRC) +}; + +/** + * @type {Array} + */ +var rOldBuffers = []; + +/** + * The current buffer we are unpacking to. + * @type {bitjs.io.ByteBuffer} + */ +var rBuffer; + +/** + * The buffer of the final bytes after filtering (only used in Unpack29). 
+ * @type {bitjs.io.ByteBuffer} + */ +var wBuffer; + +var lowDistRepCount = 0; +var prevLowDist = 0; + +var rOldDist = [0, 0, 0, 0]; +var lastDist; +var lastLength; + +/** + * In unpack.cpp, UnpPtr keeps track of what bytes have been unpacked + * into the Window buffer and WrPtr keeps track of what bytes have been + * actually written to disk after the unpacking and optional filtering + * has been done. + * + * In our case, rBuffer is the buffer for the unpacked bytes and wBuffer is + * the final output bytes. + */ + + +/** + * Read in Huffman tables for RAR + * @param {bitjs.io.BitStream} bstream + */ +function rarReadTables(bstream) { + var BitLength = new Array(rBC); + var Table = new Array(rHuffTableSize); + var i; + // before we start anything we need to get byte-aligned + bstream.readBits((8 - bstream.bitPtr) & 0x7); + + if (bstream.readBits(1)) { + info("Error! PPM not implemented yet"); + return; + } + + if (!bstream.readBits(1)) { //discard old table + for (i = UnpOldTable.length; i--;) { + UnpOldTable[i] = 0; + } + } + + // read in bit lengths + for (var I = 0; I < rBC; ++I) { + var Length = bstream.readBits(4); + if (Length === 15) { + var ZeroCount = bstream.readBits(4); + if (ZeroCount === 0) { + BitLength[I] = 15; + } else { + ZeroCount += 2; + while (ZeroCount-- > 0 && I < rBC) { + BitLength[I++] = 0; + } + --I; + } + } else { + BitLength[I] = Length; + } + } + + // now all 20 bit lengths are obtained, we construct the Huffman Table: + + rarMakeDecodeTables(BitLength, 0, BD, rBC); + + var TableSize = rHuffTableSize; + //console.log(DecodeLen, DecodePos, DecodeNum); + for (i = 0; i < TableSize;) { + var N; + var num = rarDecodeNumber(bstream, BD); + if (num < 16) { + Table[i] = (num + UnpOldTable[i]) & 0xf; + i++; + } else if (num < 18) { + N = (num === 16) ? (bstream.readBits(3) + 3) : (bstream.readBits(7) + 11); + + while (N-- > 0 && i < TableSize) { + Table[i] = Table[i - 1]; + i++; + } + } else { + N = (num === 18) ? 
(bstream.readBits(3) + 3) : (bstream.readBits(7) + 11); + + while (N-- > 0 && i < TableSize) { + Table[i++] = 0; + } + } + } + + rarMakeDecodeTables(Table, 0, LD, rNC); + rarMakeDecodeTables(Table, rNC, DD, rDC); + rarMakeDecodeTables(Table, rNC + rDC, LDD, rLDC); + rarMakeDecodeTables(Table, rNC + rDC + rLDC, RD, rRC); + + for (i = UnpOldTable.length; i--;) { + UnpOldTable[i] = Table[i]; + } + return true; +} + + +function rarDecodeNumber(bstream, dec) { + var DecodeLen = dec.DecodeLen, + DecodePos = dec.DecodePos, + DecodeNum = dec.DecodeNum; + var bitField = bstream.getBits() & 0xfffe; + //some sort of rolled out binary search + var bits = ((bitField < DecodeLen[8]) ? + ((bitField < DecodeLen[4]) ? + ((bitField < DecodeLen[2]) ? + ((bitField < DecodeLen[1]) ? 1 : 2) : + ((bitField < DecodeLen[3]) ? 3 : 4)) : + (bitField < DecodeLen[6]) ? + ((bitField < DecodeLen[5]) ? 5 : 6) : + ((bitField < DecodeLen[7]) ? 7 : 8)) : + ((bitField < DecodeLen[12]) ? + ((bitField < DecodeLen[10]) ? + ((bitField < DecodeLen[9]) ? 9 : 10) : + ((bitField < DecodeLen[11]) ? 11 : 12)) : + (bitField < DecodeLen[14]) ? + ((bitField < DecodeLen[13]) ? 
13 : 14) : + 15)); + bstream.readBits(bits); + var N = DecodePos[bits] + ((bitField - DecodeLen[bits - 1]) >>> (16 - bits)); + + return DecodeNum[N]; +} + + +function rarMakeDecodeTables(BitLength, offset, dec, size) { + var DecodeLen = dec.DecodeLen; + var DecodePos = dec.DecodePos; + var DecodeNum = dec.DecodeNum; + var LenCount = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]; + var TmpPos = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]; + var N = 0; + var M = 0; + var i; + for (i = DecodeNum.length; i--;) { + DecodeNum[i] = 0; + } + for (i = 0; i < size; i++) { + LenCount[BitLength[i + offset] & 0xF]++; + } + LenCount[0] = 0; + TmpPos[0] = 0; + DecodePos[0] = 0; + DecodeLen[0] = 0; + + var I; + for (I = 1; I < 16; ++I) { + N = 2 * (N + LenCount[I]); + M = (N << (15 - I)); + if (M > 0xFFFF) { + M = 0xFFFF; + } + DecodeLen[I] = M; + DecodePos[I] = DecodePos[I - 1] + LenCount[I - 1]; + TmpPos[I] = DecodePos[I]; + } + for (I = 0; I < size; ++I) { + if (BitLength[I + offset] !== 0) { + DecodeNum[TmpPos[BitLength[offset + I] & 0xF]++] = I; + } + } + +} + +// TODO: implement +/** + * @param {bitjs.io.BitStream} bstream + * @param {boolean} Solid + */ +function unpack15() { //bstream, Solid) { + info("ERROR! RAR 1.5 compression not supported"); +} + +/** + * Unpacks the bit stream into rBuffer using the Unpack20 algorithm. 
+ * @param {bitjs.io.BitStream} bstream + * @param {boolean} Solid + */ +function unpack20(bstream) { //, Solid) { + var destUnpSize = rBuffer.data.length; + var oldDistPtr = 0; + var Length; + var Distance; + rarReadTables20(bstream); + while (destUnpSize > rBuffer.ptr) { + var num = rarDecodeNumber(bstream, LD); + var Bits; + if (num < 256) { + rBuffer.insertByte(num); + continue; + } + if (num > 269) { + Length = rLDecode[num -= 270] + 3; + if ((Bits = rLBits[num]) > 0) { + Length += bstream.readBits(Bits); + } + var DistNumber = rarDecodeNumber(bstream, DD); + Distance = rDDecode[DistNumber] + 1; + if ((Bits = rDBits[DistNumber]) > 0) { + Distance += bstream.readBits(Bits); + } + if (Distance >= 0x2000) { + Length++; + if (Distance >= 0x40000) { + Length++; + } + } + lastLength = Length; + lastDist = rOldDist[oldDistPtr++ & 3] = Distance; + rarCopyString(Length, Distance); + continue; + } + if (num === 269) { + rarReadTables20(bstream); + rarUpdateProgress(); + continue; + } + if (num === 256) { + lastDist = rOldDist[oldDistPtr++ & 3] = lastDist; + rarCopyString(lastLength, lastDist); + continue; + } + if (num < 261) { + Distance = rOldDist[(oldDistPtr - (num - 256)) & 3]; + var LengthNumber = rarDecodeNumber(bstream, RD); + Length = rLDecode[LengthNumber] + 2; + if ((Bits = rLBits[LengthNumber]) > 0) { + Length += bstream.readBits(Bits); + } + if (Distance >= 0x101) { + Length++; + if (Distance >= 0x2000) { + Length++; + if (Distance >= 0x40000) { + Length++; + } + } + } + lastLength = Length; + lastDist = rOldDist[oldDistPtr++ & 3] = Distance; + rarCopyString(Length, Distance); + continue; + } + if (num < 270) { + Distance = rSDDecode[num -= 261] + 1; + if ((Bits = rSDBits[num]) > 0) { + Distance += bstream.readBits(Bits); + } + lastLength = 2; + lastDist = rOldDist[oldDistPtr++ & 3] = Distance; + rarCopyString(2, Distance); + continue; + } + } + rarUpdateProgress(); +} + +function rarUpdateProgress() { + var change = rBuffer.ptr - 
currentBytesUnarchivedInFile; + currentBytesUnarchivedInFile = rBuffer.ptr; + currentBytesUnarchived += change; + postProgress(); +} + +var rNC20 = 298, + rDC20 = 48, + rRC20 = 28, + rBC20 = 19, + rMC20 = 257; + +var UnpOldTable20 = new Array(rMC20 * 4); + +function rarReadTables20(bstream) { + var BitLength = new Array(rBC20); + var Table = new Array(rMC20 * 4); + var TableSize, N, I; + var i; + bstream.readBits(1); + if (!bstream.readBits(1)) { + for (i = UnpOldTable20.length; i--;) { + UnpOldTable20[i] = 0; + } + } + TableSize = rNC20 + rDC20 + rRC20; + for (I = 0; I < rBC20; I++) { + BitLength[I] = bstream.readBits(4); + } + rarMakeDecodeTables(BitLength, 0, BD, rBC20); + I = 0; + while (I < TableSize) { + var num = rarDecodeNumber(bstream, BD); + if (num < 16) { + Table[I] = num + UnpOldTable20[I] & 0xf; + I++; + } else if (num === 16) { + N = bstream.readBits(2) + 3; + while (N-- > 0 && I < TableSize) { + Table[I] = Table[I - 1]; + I++; + } + } else { + if (num === 17) { + N = bstream.readBits(3) + 3; + } else { + N = bstream.readBits(7) + 11; + } + while (N-- > 0 && I < TableSize) { + Table[I++] = 0; + } + } + } + rarMakeDecodeTables(Table, 0, LD, rNC20); + rarMakeDecodeTables(Table, rNC20, DD, rDC20); + rarMakeDecodeTables(Table, rNC20 + rDC20, RD, rRC20); + for (i = UnpOldTable20.length; i--;) { + UnpOldTable20[i] = Table[i]; + } +} + +// ============================================================================================== // + +// Unpack code specific to RarVM +var VM = new RarVM(); + +/** + * Filters code, one entry per filter. + * @type {Array} + */ +var Filters = []; + +/** + * Filters stack, several entrances of same filter are possible. + * @type {Array} + */ +var PrgStack = []; + +/** + * Lengths of preceding blocks, one length per filter. Used to reduce + * size required to write block length if lengths are repeating. 
+ * @type {Array} + */ +var OldFilterLengths = []; + +var LastFilter = 0; + +function initFilters() { + OldFilterLengths = []; + LastFilter = 0; + Filters = []; + PrgStack = []; +} + + +/** + * @param {number} firstByte The first byte (flags). + * @param {Uint8Array} vmCode An array of bytes. + */ +function rarAddVMCode(firstByte, vmCode) { + VM.init(); + var i; + var bstream = new bitjs.io.BitStream(vmCode.buffer, true /* rtl */ ); + + var filtPos; + if (firstByte & 0x80) { + filtPos = RarVM.readData(bstream); + if (filtPos === 0) { + initFilters(); + } else { + filtPos--; + } + } else { + filtPos = LastFilter; + } + + if (filtPos > Filters.length || filtPos > OldFilterLengths.length) { + return false; + } + + LastFilter = filtPos; + var newFilter = (filtPos === Filters.length); + + // new filter for PrgStack + var stackFilter = new UnpackFilter(); + var filter = null; + // new filter code, never used before since VM reset + if (newFilter) { + // too many different filters, corrupt archive + if (filtPos > 1024) { + return false; + } + + filter = new UnpackFilter(); + Filters.push(filter); + stackFilter.ParentFilter = (Filters.length - 1); + OldFilterLengths.push(0); // OldFilterLengths.Add(1) + filter.ExecCount = 0; + } else { // filter was used in the past + filter = Filters[filtPos]; + stackFilter.ParentFilter = filtPos; + filter.ExecCount++; + } + + var emptyCount = 0; + for (i = 0; i < PrgStack.length; ++i) { + PrgStack[i - emptyCount] = PrgStack[i]; + + if (PrgStack[i] === null) { + emptyCount++; + } + if (emptyCount > 0) { + PrgStack[i] = null; + } + } + + if (emptyCount === 0) { + PrgStack.push(null); //PrgStack.Add(1); + emptyCount = 1; + } + + var stackPos = PrgStack.length - emptyCount; + PrgStack[stackPos] = stackFilter; + stackFilter.ExecCount = filter.ExecCount; + + var blockStart = RarVM.readData(bstream); + if (firstByte & 0x40) { + blockStart += 258; + } + stackFilter.BlockStart = (blockStart + rBuffer.ptr) & MAXWINMASK; + + if (firstByte & 0x20) { 
+ stackFilter.BlockLength = RarVM.readData(bstream); + } else { + stackFilter.BlockLength = filtPos < OldFilterLengths.length ? + OldFilterLengths[filtPos] : + 0; + } + stackFilter.NextWindow = (wBuffer.ptr !== rBuffer.ptr) && + (((wBuffer.ptr - rBuffer.ptr) & MAXWINMASK) <= blockStart); + + OldFilterLengths[filtPos] = stackFilter.BlockLength; + + for (i = 0; i < 7; ++i) { + stackFilter.Prg.InitR[i] = 0; + } + stackFilter.Prg.InitR[3] = VM_GLOBALMEMADDR; + stackFilter.Prg.InitR[4] = stackFilter.BlockLength; + stackFilter.Prg.InitR[5] = stackFilter.ExecCount; + + // set registers to optional parameters if any + if (firstByte & 0x10) { + var initMask = bstream.readBits(7); + for (i = 0; i < 7; ++i) { + if (initMask & (1 << i)) { + stackFilter.Prg.InitR[i] = RarVM.readData(bstream); + } + } + } + + if (newFilter) { + var vmCodeSize = RarVM.readData(bstream); + if (vmCodeSize >= 0x10000 || vmCodeSize === 0) { + return false; + } + vmCode = new Uint8Array(vmCodeSize); + for (i = 0; i < vmCodeSize; ++i) { + //if (Inp.Overflow(3)) + // return(false); + vmCode[i] = bstream.readBits(8); + } + VM.prepare(vmCode, filter.Prg); + } + stackFilter.Prg.Cmd = filter.Prg.Cmd; + stackFilter.Prg.AltCmd = filter.Prg.Cmd; + + var staticDataSize = filter.Prg.StaticData.length; + if (staticDataSize > 0 && staticDataSize < VM_GLOBALMEMSIZE) { + // read statically defined data contained in DB commands + for (i = 0; i < staticDataSize; ++i) { + stackFilter.Prg.StaticData[i] = filter.Prg.StaticData[i]; + } + } + + if (stackFilter.Prg.GlobalData.length < VM_FIXEDGLOBALSIZE) { + stackFilter.Prg.GlobalData = new Uint8Array(VM_FIXEDGLOBALSIZE); + } + + var globalData = stackFilter.Prg.GlobalData; + for (i = 0; i < 7; ++i) { + VM.setLowEndianValue(globalData, stackFilter.Prg.InitR[i], i * 4); + } + + VM.setLowEndianValue(globalData, stackFilter.BlockLength, 0x1c); + VM.setLowEndianValue(globalData, 0, 0x20); + VM.setLowEndianValue(globalData, stackFilter.ExecCount, 0x2c); + for (i = 0; i < 16; 
++i) { + globalData[0x30 + i] = 0; + } + + // put data block passed as parameter if any + if (firstByte & 8) { + //if (Inp.Overflow(3)) + // return(false); + var dataSize = RarVM.readData(bstream); + if (dataSize > (VM_GLOBALMEMSIZE - VM_FIXEDGLOBALSIZE)) { + return (false); + } + + var curSize = stackFilter.Prg.GlobalData.length; + if (curSize < dataSize + VM_FIXEDGLOBALSIZE) { + // Resize global data and update the stackFilter and local variable. + var numBytesToAdd = dataSize + VM_FIXEDGLOBALSIZE - curSize; + var newGlobalData = new Uint8Array(globalData.length + numBytesToAdd); + newGlobalData.set(globalData); + + stackFilter.Prg.GlobalData = newGlobalData; + globalData = newGlobalData; + } + //byte *GlobalData=&StackFilter->Prg.GlobalData[VM_FIXEDGLOBALSIZE]; + for (i = 0; i < dataSize; ++i) { + //if (Inp.Overflow(3)) + // return(false); + globalData[VM_FIXEDGLOBALSIZE + i] = bstream.readBits(8); + } + } + + return true; +} + + +/** + * @param {!bitjs.io.BitStream} bstream + */ +function rarReadVMCode(bstream) { + var firstByte = bstream.readBits(8); + var length = (firstByte & 7) + 1; + if (length === 7) { + length = bstream.readBits(8) + 7; + } else if (length === 8) { + length = bstream.readBits(16); + } + + // Read all bytes of VM code into an array. + var vmCode = new Uint8Array(length); + for (var i = 0; i < length; i++) { + // Do something here with checking readbuf. + vmCode[i] = bstream.readBits(8); + } + return rarAddVMCode(firstByte, vmCode); +} + +/** + * Unpacks the bit stream into rBuffer using the Unpack29 algorithm. 
+ * @param {bitjs.io.BitStream} bstream + * @param {boolean} Solid + */ +function unpack29(bstream) { + // lazy initialize rDDecode and rDBits + + var DDecode = new Array(rDC); + var DBits = new Array(rDC); + var Distance = 0; + var Length = 0; + var Dist = 0, BitLength = 0, Slot = 0; + var I; + for (I = 0; I < rDBitLengthCounts.length; I++, BitLength++) { + for (var J = 0; J < rDBitLengthCounts[I]; J++, Slot++, Dist += (1 << BitLength)) { + DDecode[Slot] = Dist; + DBits[Slot] = BitLength; + } + } + + var Bits; + //tablesRead = false; + + rOldDist = [0, 0, 0, 0]; + + lastDist = 0; + lastLength = 0; + var i; + for (i = UnpOldTable.length; i--;) { + UnpOldTable[i] = 0; + } + + // read in Huffman tables + rarReadTables(bstream); + + while (true) { + var num = rarDecodeNumber(bstream, LD); + + if (num < 256) { + rBuffer.insertByte(num); + continue; + } + if (num >= 271) { + Length = rLDecode[num -= 271] + 3; + if ((Bits = rLBits[num]) > 0) { + Length += bstream.readBits(Bits); + } + var DistNumber = rarDecodeNumber(bstream, DD); + Distance = DDecode[DistNumber] + 1; + if ((Bits = DBits[DistNumber]) > 0) { + if (DistNumber > 9) { + if (Bits > 4) { + Distance += ((bstream.getBits() >>> (20 - Bits)) << 4); + bstream.readBits(Bits - 4); + //todo: check this + } + if (lowDistRepCount > 0) { + lowDistRepCount--; + Distance += prevLowDist; + } else { + var LowDist = rarDecodeNumber(bstream, LDD); + if (LowDist === 16) { + lowDistRepCount = rLowDistRepCount - 1; + Distance += prevLowDist; + } else { + Distance += LowDist; + prevLowDist = LowDist; + } + } + } else { + Distance += bstream.readBits(Bits); + } + } + if (Distance >= 0x2000) { + Length++; + if (Distance >= 0x40000) { + Length++; + } + } + rarInsertOldDist(Distance); + rarInsertLastMatch(Length, Distance); + rarCopyString(Length, Distance); + continue; + } + if (num === 256) { + if (!rarReadEndOfBlock(bstream)) { + break; + } + continue; + } + if (num === 257) { + if (!rarReadVMCode(bstream)) { + break; + } + 
continue; + } + if (num === 258) { + if (lastLength !== 0) { + rarCopyString(lastLength, lastDist); + } + continue; + } + if (num < 263) { + var DistNum = num - 259; + Distance = rOldDist[DistNum]; + + for (var I2 = DistNum; I2 > 0; I2--) { + rOldDist[I2] = rOldDist[I2 - 1]; + } + rOldDist[0] = Distance; + + var LengthNumber = rarDecodeNumber(bstream, RD); + Length = rLDecode[LengthNumber] + 2; + if ((Bits = rLBits[LengthNumber]) > 0) { + Length += bstream.readBits(Bits); + } + rarInsertLastMatch(Length, Distance); + rarCopyString(Length, Distance); + continue; + } + if (num < 272) { + Distance = rSDDecode[num -= 263] + 1; + if ((Bits = rSDBits[num]) > 0) { + Distance += bstream.readBits(Bits); + } + rarInsertOldDist(Distance); + rarInsertLastMatch(2, Distance); + rarCopyString(2, Distance); + continue; + } + } // while (true) + rarUpdateProgress(); + rarWriteBuf(); +} + +/** + * Does stuff to the current byte buffer (rBuffer) based on + * the filters loaded into the RarVM and writes out to wBuffer. + */ +function rarWriteBuf() { + var writeSize = (rBuffer.ptr & MAXWINMASK); + var j; + var flt; + for (var i = 0; i < PrgStack.length; ++i) { + flt = PrgStack[i]; + if (flt === null) { + continue; + } + + if (flt.NextWindow) { + flt.NextWindow = false; + continue; + } + + var blockStart = flt.BlockStart; + var blockLength = flt.BlockLength; + var parentPrg; + + // WrittenBorder = wBuffer.ptr + if (((blockStart - wBuffer.ptr) & MAXWINMASK) < writeSize) { + if (wBuffer.ptr !== blockStart) { + // Copy blockStart bytes from rBuffer into wBuffer. 
+ rarWriteArea(wBuffer.ptr, blockStart); + writeSize = (rBuffer.ptr - wBuffer.ptr) & MAXWINMASK; + } + if (blockLength <= writeSize) { + var blockEnd = (blockStart + blockLength) & MAXWINMASK; + if (blockStart < blockEnd || blockEnd === 0) { + VM.setMemory(0, rBuffer.data.subarray(blockStart, blockStart + blockLength), blockLength); + } else { + var firstPartLength = MAXWINSIZE - blockStart; + VM.setMemory(0, rBuffer.data.subarray(blockStart, blockStart + firstPartLength), firstPartLength); + VM.setMemory(firstPartLength, rBuffer.data, blockEnd); + } + + parentPrg = Filters[flt.ParentFilter].Prg; + var prg = flt.Prg; + + if (parentPrg.GlobalData.length > VM_FIXEDGLOBALSIZE) { + // Copy global data from previous script execution if any. + prg.GlobalData = new Uint8Array(parentPrg.GlobalData); + } + + rarExecuteCode(prg); + var globalDataLen; + + if (prg.GlobalData.length > VM_FIXEDGLOBALSIZE) { + // Save global data for next script execution. + globalDataLen = prg.GlobalData.length; + if (parentPrg.GlobalData.length < globalDataLen) { + parentPrg.GlobalData = new Uint8Array(globalDataLen); + } + parentPrg.GlobalData.set( + VM.mem_.subarray(VM_FIXEDGLOBALSIZE, VM_FIXEDGLOBALSIZE + globalDataLen), + VM_FIXEDGLOBALSIZE); + } else { + parentPrg.GlobalData = new Uint8Array(0); + } + + var filteredData = prg.FilteredData; + + PrgStack[i] = null; + while (i + 1 < PrgStack.length) { + var nextFilter = PrgStack[i + 1]; + if (nextFilter === null || nextFilter.BlockStart !== blockStart || + nextFilter.BlockLength !== filteredData.length || nextFilter.NextWindow) { + break; + } + + // Apply several filters to same data block. + + VM.setMemory(0, filteredData, filteredData.length); + + parentPrg = Filters[nextFilter.ParentFilter].Prg; + var nextPrg = nextFilter.Prg; + + globalDataLen = parentPrg.GlobalData.length; + if (globalDataLen > VM_FIXEDGLOBALSIZE) { + // Copy global data from previous script execution if any. 
+ nextPrg.GlobalData = new Uint8Array(globalDataLen); + nextPrg.GlobalData.set(parentPrg.GlobalData.subarray(VM_FIXEDGLOBALSIZE, VM_FIXEDGLOBALSIZE + globalDataLen), VM_FIXEDGLOBALSIZE); + } + + rarExecuteCode(nextPrg); + + if (nextPrg.GlobalData.length > VM_FIXEDGLOBALSIZE) { + // Save global data for next script execution. + globalDataLen = nextPrg.GlobalData.length; + if (parentPrg.GlobalData.length < globalDataLen) { + parentPrg.GlobalData = new Uint8Array(globalDataLen); + } + parentPrg.GlobalData.set( + VM.mem_.subarray(VM_FIXEDGLOBALSIZE, VM_FIXEDGLOBALSIZE + globalDataLen), + VM_FIXEDGLOBALSIZE); + } else { + parentPrg.GlobalData = new Uint8Array(0); + } + + filteredData = nextPrg.FilteredData; + i++; + PrgStack[i] = null; + } // while (i + 1 < PrgStack.length) + + for (j = 0; j < filteredData.length; ++j) { + wBuffer.insertByte(filteredData[j]); + } + writeSize = (rBuffer.ptr - wBuffer.ptr) & MAXWINMASK; + } else { // if (blockLength <= writeSize) + for (j = i; j < PrgStack.length; ++j) { + flt = PrgStack[j]; + if (flt !== null && flt.NextWindow) { + flt.NextWindow = false; + } + } + //WrPtr=WrittenBorder; + return; + } + } // if (((blockStart - wBuffer.ptr) & MAXWINMASK) < writeSize) + } // for (var i = 0; i < PrgStack.length; ++i) + + // Write any remaining bytes from rBuffer to wBuffer. + rarWriteArea(wBuffer.ptr, rBuffer.ptr); + + // Now that the filtered buffer has been written, swap it back to rBuffer. + rBuffer = wBuffer; +} + +/** + * Copy bytes from rBuffer to wBuffer. + * @param {number} startPtr The starting point to copy from rBuffer. + * @param {number} endPtr The ending point to copy from rBuffer. 
+ */ +function rarWriteArea(startPtr, endPtr) { + if (endPtr < startPtr) { + console.error("endPtr < startPtr, endPtr=" + endPtr + ", startPtr=" + startPtr); + // rarWriteData(startPtr, -(int)StartPtr & MAXWINMASK); + // RarWriteData(0, endPtr); + return; + } else if (startPtr < endPtr) { + rarWriteData(startPtr, endPtr - startPtr); + } +} + +/** + * Writes bytes into wBuffer from rBuffer. + * @param {number} offset The starting point to copy bytes from rBuffer. + * @param {number} numBytes The number of bytes to copy. + */ +function rarWriteData(offset, numBytes) { + if (wBuffer.ptr >= rBuffer.data.length) { + return; + } + var leftToWrite = rBuffer.data.length - wBuffer.ptr; + if (numBytes > leftToWrite) { + numBytes = leftToWrite; + } + for (var i = 0; i < numBytes; ++i) { + wBuffer.insertByte(rBuffer.data[offset + i]); + } +} + +/** + * @param {VM_PreparedProgram} prg + */ +function rarExecuteCode(prg) { + if (prg.GlobalData.length > 0) { + var writtenFileSize = wBuffer.ptr; + prg.InitR[6] = writtenFileSize; + VM.setLowEndianValue(prg.GlobalData, writtenFileSize, 0x24); + VM.setLowEndianValue(prg.GlobalData, (writtenFileSize >>> 32) >> 0, 0x28); + VM.execute(prg); + } +} + +function rarReadEndOfBlock(bstream) { + rarUpdateProgress(); + + var NewTable = false, + NewFile = false; + if (bstream.readBits(1)) { + NewTable = true; + } else { + NewFile = true; + NewTable = !!bstream.readBits(1); + } + //tablesRead = !NewTable; + return !(NewFile || (NewTable && !rarReadTables(bstream))); +} + +function rarInsertLastMatch(length, distance) { + lastDist = distance; + lastLength = length; +} + +function rarInsertOldDist(distance) { + rOldDist.splice(3, 1); + rOldDist.splice(0, 0, distance); +} + +/** + * Copies len bytes from distance bytes ago in the buffer to the end of the + * current byte buffer. + * @param {number} length How many bytes to copy. + * @param {number} distance How far back in the buffer from the current write + * pointer to start copying from. 
+ */ +function rarCopyString(length, distance) { + var srcPtr = rBuffer.ptr - distance; + if (srcPtr < 0) { + var l = rOldBuffers.length; + while (srcPtr < 0) { + srcPtr = rOldBuffers[--l].data.length + srcPtr; + } + // TODO: let's hope that it never needs to read beyond file boundaries + while (length--) { + rBuffer.insertByte(rOldBuffers[l].data[srcPtr++]); + } + } + if (length > distance) { + while (length--) { + rBuffer.insertByte(rBuffer.data[srcPtr++]); + } + } else { + rBuffer.insertBytes(rBuffer.data.subarray(srcPtr, srcPtr + length)); + } +} + +/** + * @param {RarLocalFile} v + */ +function unpack(v) { + // TODO: implement what happens when unpVer is < 15 + var Ver = v.header.unpVer <= 15 ? 15 : v.header.unpVer; + // var Solid = v.header.LHD_SOLID; + var bstream = new bitjs.io.BitStream(v.fileData.buffer, true /* rtl */, v.fileData.byteOffset, v.fileData.byteLength); + + rBuffer = new bitjs.io.ByteBuffer(v.header.unpackedSize); + + info("Unpacking " + v.filename + " RAR v" + Ver); + + switch (Ver) { + case 15: // rar 1.5 compression + unpack15(); //(bstream, Solid); + break; + case 20: // rar 2.x compression + case 26: // files larger than 2GB + unpack20(bstream); //, Solid); + break; + case 29: // rar 3.x compression + case 36: // alternative hash + wBuffer = new bitjs.io.ByteBuffer(rBuffer.data.length); + unpack29(bstream); + break; + } // switch (Ver) + + rOldBuffers.push(rBuffer); + // TODO: clear these old buffers when there's over 4MB of history + return rBuffer.data; +} + +// bstream is a bit stream +var RarLocalFile = function(bstream) { + this.header = new RarVolumeHeader(bstream); + this.filename = this.header.filename; + + if (this.header.headType !== FILE_HEAD && this.header.headType !== ENDARC_HEAD && this.header.headType !== SERVICE_HEAD) { + this.isValid = false; + info("Error!
RAR Volume did not include a FILE_HEAD header "); + } else { + // read in the compressed data + this.fileData = null; + if (this.header.packSize > 0) { + this.fileData = bstream.readBytes(this.header.packSize); + if (this.header.headType === FILE_HEAD) { + this.isValid = true; + } + } + } +}; + +RarLocalFile.prototype.unrar5 = function() { + //if (!this.header.flags.LHD_SPLIT_BEFORE) { + // unstore file + // No compression + if (this.header.method === 0x00) { + info("Unstore " + this.filename); + this.isValid = true; + + currentBytesUnarchivedInFile += this.fileData.length; + currentBytesUnarchived += this.fileData.length; + + // Create a new buffer and copy it over. + var len = this.header.packSize; + var newBuffer = new bitjs.io.ByteBuffer(len); + newBuffer.insertBytes(this.fileData); + this.fileData = newBuffer.data; + } else { + this.isValid = true; + this.fileData = unpack(this); + } + //} +}; + +var unrar5 = function(arrayBuffer) { + currentFilename = ""; + currentFileNumber = 0; + currentBytesUnarchivedInFile = 0; + currentBytesUnarchived = 0; + totalUncompressedBytesInArchive = 0; + totalFilesInArchive = 0; + + // postMessage(new bitjs.archive.UnarchiveStartEvent()); + var bstream = new bitjs.io.BitStream(arrayBuffer, false /* rtl */); + + var header = new RarMainVolumeHeader(bstream); + if (header.crc === 0x6152 && + header.headType === 0x72 && + header.flags.value === 0x1A21 && + header.headSize === 7) { + info("Found RAR signature"); + + var mhead = new RarVolumeHeader(bstream); + if (mhead.headType !== MAIN_HEAD) { + info("Error! 
RAR did not include a MAIN_HEAD header"); + } else { + var localFiles = []; + var localFile = null; + do { + try { + localFile = new RarLocalFile(bstream); + info("RAR localFile isValid=" + localFile.isValid + ", volume packSize=" + localFile.header.packSize); + if (localFile && localFile.isValid && localFile.header.packSize > 0) { + totalUncompressedBytesInArchive += localFile.header.unpackedSize; + localFiles.push(localFile); + } else if (localFile.header.packSize === 0 && localFile.header.unpackedSize === 0) { + localFile.isValid = true; + } + } catch (err) { + break; + } + //info("bstream" + bstream.bytePtr+"/"+bstream.bytes.length); + } while (localFile.isValid); + totalFilesInArchive = localFiles.length; + + // now we have all information but things are unpacked + localFiles.sort(alphanumCase); + + info(localFiles.map(function(a) { + return a.filename; + }).join(", ")); + for (var i = 0; i < localFiles.length; ++i) { + var localfile = localFiles[i]; + + // update progress + currentFilename = localfile.header.filename; + currentBytesUnarchivedInFile = 0; + + // actually do the unzipping + localfile.unrar5(); + + if (localfile.isValid) { + postMessage(new bitjs.archive.UnarchiveExtractEvent(localfile)); + postProgress(); + } + } + + postProgress(); + } + } else { + err("Invalid RAR file"); + } + // postMessage(new bitjs.archive.UnarchiveFinishEvent()); +}; + +// event.data.file has the ArrayBuffer. +onmessage = function(event) { + var ab = event.data.file; + unrar5(ab, true); +}; diff --git a/cps/static/js/caliBlur.js b/cps/static/js/caliBlur.js index 73e1c320..f7ab8539 100644 --- a/cps/static/js/caliBlur.js +++ b/cps/static/js/caliBlur.js @@ -145,44 +145,16 @@ if ($("body.book").length > 0) { $(".blur-wrapper") .prepend('
Blurred cover
'); - // Fix-up book detail headings - publisher = $(".publishers p span").text().split(":"); - $(".publishers p span").remove(); - $.each(publisher, function (i, val) { - $(".publishers").append("" + publisher[i] + ""); - }); - $(".publishers span:nth-child(3)").text(function () { - return $(this).text().replace(/^\s+|^\t+|\t+|\s+$/g, ""); - }); - - // Fix-up book custom colums headings - // real_custom_column = $( '.real_custom_columns' ).text().split( ':' ); - real_custom_column = $(".real_custom_columns"); - // $( ".real_custom_columns" ).remove(); - $.each(real_custom_column, function (i, val) { - var split = $(this).text().split(":"); - real_cc_key = split.shift(); - real_cc_value = split.join(":"); - $(this).text(""); - if (real_cc_value != "") { - $(this).append("" + real_cc_key + "" + real_cc_value + ""); + // Metadata Fields - Publishers, Published, Languages and Custom + $('.publishers, .publishing-date, .real_custom_columns, .languages').each(function () { + var splitText = $(this).text().split(':'); + var label = splitText.shift().trim(); + var value = splitText.join(':').trim(); + // Preserve Links + if ($(this).find('a').length) { + value = $(this).find('a').first().removeClass(); } - }); - //$( '.real_custom_columns:nth-child(3)' ).text(function() { - //return $(this).text().replace(/^\s+|^\t+|\t+|\s+$/g, ""); - //}); - - published = $(".publishing-date p") - .text().split(": "); - $(".publishing-date p").remove(); - $.each(published, function (i, val) { - $(".publishing-date").append("" + published[i] + ""); - }); - - languages = $(".languages p span").text().split(": "); - $(".languages p span").remove(); - $.each(languages, function (i, val) { - $(".languages").append("" + languages[i] + ""); + $(this).html('' + label + '').find('span').last().append(value); }); $(".book-meta h2:first").clone() @@ -246,11 +218,6 @@ if ($("body.book").length > 0) { $("#add-to-shelves").toggle(); }); - // Fix formatting error on book detail languages - if 
(!$(".book-meta > .bookinfo > .languages > span:last-of-type").text().startsWith(" ")) { - $(".book-meta > .bookinfo > .languages > span:last-of-type").prepend(" "); - } - //Work to reposition dropdowns. Does not currently solve for //screen resizing function dropdownToggle() { @@ -710,7 +677,7 @@ $(".navbar-collapse.collapse.in").before('') // Get rid of leading white space recentlyAdded = $("#nav_new a:contains('Recently')").text().trim(); $("#nav_new a:contains('Recently')").contents().filter(function () { - return this.nodeType == 3 + return this.nodeType === 3 }).each(function () { this.textContent = this.textContent.replace(" Recently Added", recentlyAdded); }); diff --git a/cps/static/js/edit_books.js b/cps/static/js/edit_books.js index 35515aa1..8cedf688 100644 --- a/cps/static/js/edit_books.js +++ b/cps/static/js/edit_books.js @@ -1,7 +1,7 @@ /** * Created by SpeedProg on 05.04.2015. */ -/* global Bloodhound, language, Modernizr, tinymce */ +/* global Bloodhound, language, Modernizr, tinymce, getPath */ if ($("#description").length) { tinymce.init({ @@ -78,10 +78,10 @@ function prefixedSource(prefix, query, cb, bhAdapter) { }); } -function getPath() { +/*function getPath() { var jsFileLocation = $("script[src*=edit_books]").attr("src"); // the js file path return jsFileLocation.substr(0, jsFileLocation.search("/static/js/edit_books.js")); // the js folder path -} +}*/ var authors = new Bloodhound({ name: "authors", @@ -249,18 +249,26 @@ promisePublishers.done(function() { ); }); -$("#search").on("change input.typeahead:selected", function() { +$("#search").on("change input.typeahead:selected", function(event) { + if (event.target.type === "search" && event.target.tagName === "INPUT") { + return; + } var form = $("form").serialize(); $.getJSON( getPath() + "/get_matching_tags", form, function( data ) { $(".tags_click").each(function() { - if ($.inArray(parseInt($(this).children("input").first().val(), 10), data.tags) === -1 ) { - if 
(!($(this).hasClass("active"))) { - $(this).addClass("disabled"); + if ($.inArray(parseInt($(this).val(), 10), data.tags) === -1) { + if (!$(this).prop("selected")) { + $(this).prop("disabled", true); } } else { - $(this).removeClass("disabled"); + $(this).prop("disabled", false); } }); + $("#include_tag option:selected").each(function () { + $("#exclude_tag").find("[value=" + $(this).val() + "]").prop("disabled", true); + }); + $("#include_tag").selectpicker("refresh"); + $("#exclude_tag").selectpicker("refresh"); }); }); diff --git a/cps/static/js/filter_grid.js b/cps/static/js/filter_grid.js index 457b9055..d57d155f 100644 --- a/cps/static/js/filter_grid.js +++ b/cps/static/js/filter_grid.js @@ -24,31 +24,47 @@ var $list = $("#list").isotope({ }); $("#desc").click(function() { + var page = $(this).data("id"); + $.ajax({ + method:"post", + contentType: "application/json; charset=utf-8", + dataType: "json", + url: window.location.pathname + "/../../ajax/view", + data: "{\"" + page + "\": {\"dir\": \"desc\"}}", + }); $list.isotope({ sortBy: "name", sortAscending: true }); - return; }); $("#asc").click(function() { + var page = $(this).data("id"); + $.ajax({ + method:"post", + contentType: "application/json; charset=utf-8", + dataType: "json", + url: window.location.pathname + "/../../ajax/view", + data: "{\"" + page + "\": {\"dir\": \"asc\"}}", + }); $list.isotope({ sortBy: "name", sortAscending: false }); - return; }); $("#all").click(function() { // go through all elements and make them visible $list.isotope({ filter: function() { return true; - } }) + } + }); }); $(".char").click(function() { var character = this.innerText; $list.isotope({ filter: function() { - return this.attributes["data-id"].value.charAt(0).toUpperCase() == character; - } }) + return this.attributes["data-id"].value.charAt(0).toUpperCase() === character; + } + }); }); diff --git a/cps/static/js/filter_list.js b/cps/static/js/filter_list.js index 8291f0ac..ef7639fa 100644 --- 
a/cps/static/js/filter_list.js +++ b/cps/static/js/filter_list.js @@ -19,6 +19,10 @@ var direction = 0; // Descending order var sort = 0; // Show sorted entries $("#sort_name").click(function() { + var className = $("h1").attr("Class") + "_sort_name"; + var obj = {}; + obj[className] = sort; + var count = 0; var index = 0; var store; @@ -40,9 +44,7 @@ $("#sort_name").click(function() { count++; } }); - /*listItems.sort(function(a,b){ - return $(a).children()[1].innerText.localeCompare($(b).children()[1].innerText) - });*/ + // Find count of middle element if (count > 20) { var middle = parseInt(count / 2, 10) + (count % 2); @@ -66,6 +68,14 @@ $("#desc").click(function() { if (direction === 0) { return; } + var page = $(this).data("id"); + $.ajax({ + method:"post", + contentType: "application/json; charset=utf-8", + dataType: "json", + url: window.location.pathname + "/../../ajax/view", + data: "{\"" + page + "\": {\"dir\": \"desc\"}}", + }); var index = 0; var list = $("#list"); var second = $("#second"); @@ -78,7 +88,7 @@ $("#desc").click(function() { // Find count of middle element var count = $(".row:visible").length; if (count > 20) { - middle = parseInt(count / 2) + (count % 2); + middle = parseInt(count / 2, 10) + (count % 2); //var middle = parseInt(count / 2) + (count % 2); // search for the middle of all visible elements @@ -102,9 +112,18 @@ $("#desc").click(function() { $("#asc").click(function() { + if (direction === 1) { return; } + var page = $(this).data("id"); + $.ajax({ + method:"post", + contentType: "application/json; charset=utf-8", + dataType: "json", + url: window.location.pathname + "/../../ajax/view", + data: "{\"" + page + "\": {\"dir\": \"asc\"}}", + }); var index = 0; var list = $("#list"); var second = $("#second"); @@ -116,7 +135,7 @@ $("#asc").click(function() { // Find count of middle element var count = $(".row:visible").length; if (count > 20) { - var middle = parseInt(count / 2) + (count % 2); + var middle = parseInt(count / 2, 10) 
+ (count % 2); //var middle = parseInt(count / 2) + (count % 2); // search for the middle of all visible elements @@ -131,7 +150,6 @@ $("#asc").click(function() { }); // middle = parseInt(elementLength / 2) + (elementLength % 2); - list.append(reversed.slice(0, index)); second.append(reversed.slice(index, elementLength)); } else { diff --git a/cps/static/js/get_meta.js b/cps/static/js/get_meta.js index e27608ab..04c1d270 100644 --- a/cps/static/js/get_meta.js +++ b/cps/static/js/get_meta.js @@ -108,7 +108,7 @@ $(function () { tags: result.volumeInfo.categories || [], rating: result.volumeInfo.averageRating || 0, cover: result.volumeInfo.imageLinks ? - result.volumeInfo.imageLinks.thumbnail : "/static/generic_cover.jpg", + result.volumeInfo.imageLinks.thumbnail : location + "/../../../static/generic_cover.jpg", url: "https://books.google.com/books?id=" + result.id, source: { id: "google", @@ -138,8 +138,8 @@ $(function () { seriesTitle = result.series.title; } var dateFomers = result.pubdate.split("-"); - var publishedYear = parseInt(dateFomers[0]); - var publishedMonth = parseInt(dateFomers[1]); + var publishedYear = parseInt(dateFomers[0], 10); + var publishedMonth = parseInt(dateFomers[1], 10); var publishedDate = new Date(publishedYear, publishedMonth - 1, 1); publishedDate = formatDate(publishedDate); @@ -194,8 +194,8 @@ $(function () { } else { dateFomers = result.date_added.split("-"); } - var publishedYear = parseInt(dateFomers[0]); - var publishedMonth = parseInt(dateFomers[1]); + var publishedYear = parseInt(dateFomers[0], 10); + var publishedMonth = parseInt(dateFomers[1], 10); var publishedDate = new Date(publishedYear, publishedMonth - 1, 1); publishedDate = formatDate(publishedDate); @@ -253,7 +253,7 @@ $(function () { } function dbSearchBook (title) { - var apikey = "0df993c66c0c636e29ecbb5344252a4a"; + var apikey = "054022eaeae0b00e0fc068c0c0a2102a"; $.ajax({ url: douban + dbSearch + "?apikey=" + apikey + "&q=" + title + "&fields=all&count=10", type: 
"GET", diff --git a/cps/static/js/kthoom.js b/cps/static/js/kthoom.js index 33a2ac0e..56038fc6 100644 --- a/cps/static/js/kthoom.js +++ b/cps/static/js/kthoom.js @@ -141,9 +141,27 @@ var createURLFromArray = function(array, mimeType) { kthoom.ImageFile = function(file) { this.filename = file.filename; var fileExtension = file.filename.split(".").pop().toLowerCase(); - this.mimeType = fileExtension === "png" ? "image/png" : - (fileExtension === "jpg" || fileExtension === "jpeg") ? "image/jpeg" : - fileExtension === "gif" ? "image/gif" : fileExtension === "svg" ? "image/xml+svg" : undefined; + switch (fileExtension) { + case "jpg": + case "jpeg": + this.mimeType = "image/jpeg"; + break; + case "png": + this.mimeType = "image/png"; + break; + case "gif": + this.mimeType = "image/gif"; + break; + case "svg": + this.mimeType = "image/svg+xml"; + break; + case "webp": + this.mimeType = "image/webp"; + break; + default: + this.mimeType = undefined; + break; + } if ( this.mimeType !== undefined) { this.dataURI = createURLFromArray(file.fileData, this.mimeType); this.data = file; @@ -331,7 +349,7 @@ function setImage(url) { $("#mainText").innerHTML(""); }; xhr.send(null); - } else if (!/(jpg|jpeg|png|gif)$/.test(imageFiles[currentImage].filename) && imageFiles[currentImage].data.uncompressedSize < 10 * 1024) { + } else if (!/(jpg|jpeg|png|gif|webp)$/.test(imageFiles[currentImage].filename) && imageFiles[currentImage].data.uncompressedSize < 10 * 1024) { xhr.open("GET", url, true); xhr.onload = function() { $("#mainText").css("display", ""); @@ -635,6 +653,14 @@ function init(filename) { // Focus the scrollable area so that keyboard scrolling work as expected $("#mainContent").focus(); + $("#mainContent").swipe( { + swipeRight:function() { + showLeftPage(); + }, + swipeLeft:function() { + showRightPage(); + }, + }); $("#mainImage").click(function(evt) { // Firefox does not support offsetX/Y so we have to manually calculate // where the user clicked in the image. 
diff --git a/cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.el.min.js b/cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.el.min.js new file mode 100644 index 00000000..046e9eb5 --- /dev/null +++ b/cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.el.min.js @@ -0,0 +1 @@ +!function(a){a.fn.datepicker.dates.el={days:["Κυριακή","Δευτέρα","Τρίτη","Τετάρτη","Πέμπτη","Παρασκευή","Σάββατο"],daysShort:["Κυρ","Δευ","Τρι","Τετ","Πεμ","Παρ","Σαβ"],daysMin:["Κυ","Δε","Τρ","Τε","Πε","Πα","Σα"],months:["Ιανουάριος","Φεβρουάριος","Μάρτιος","Απρίλιος","Μάιος","Ιούνιος","Ιούλιος","Αύγουστος","Σεπτέμβριος","Οκτώβριος","Νοέμβριος","Δεκέμβριος"],monthsShort:["Ιαν","Φεβ","Μαρ","Απρ","Μάι","Ιουν","Ιουλ","Αυγ","Σεπ","Οκτ","Νοε","Δεκ"],today:"Σήμερα",clear:"Καθαρισμός",weekStart:1,format:"d/m/yyyy"}}(jQuery); \ No newline at end of file diff --git a/cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.tr.min.js b/cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.tr.min.js new file mode 100644 index 00000000..7889b113 --- /dev/null +++ b/cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.tr.min.js @@ -0,0 +1 @@ +!function(a){a.fn.datepicker.dates.tr={days:["Pazar","Pazartesi","Salı","Çarşamba","Perşembe","Cuma","Cumartesi"],daysShort:["Pz","Pzt","Sal","Çrş","Prş","Cu","Cts"],daysMin:["Pz","Pzt","Sa","Çr","Pr","Cu","Ct"],months:["Ocak","Şubat","Mart","Nisan","Mayıs","Haziran","Temmuz","Ağustos","Eylül","Ekim","Kasım","Aralık"],monthsShort:["Oca","Şub","Mar","Nis","May","Haz","Tem","Ağu","Eyl","Eki","Kas","Ara"],today:"Bugün",clear:"Temizle",weekStart:1,format:"dd.mm.yyyy"}}(jQuery); \ No newline at end of file diff --git a/cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.zh-CN.min.js b/cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.zh_Hans_CN.min.js similarity index 100% rename from 
cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.zh-CN.min.js rename to cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.zh_Hans_CN.min.js diff --git a/cps/static/js/libs/bootstrap-select.min.js b/cps/static/js/libs/bootstrap-select.min.js new file mode 100644 index 00000000..92e3a32e --- /dev/null +++ b/cps/static/js/libs/bootstrap-select.min.js @@ -0,0 +1,9 @@ +/*! + * Bootstrap-select v1.13.14 (https://developer.snapappointments.com/bootstrap-select) + * + * Copyright 2012-2020 SnapAppointments, LLC + * Licensed under MIT (https://github.com/snapappointments/bootstrap-select/blob/master/LICENSE) + */ + +!function(e,t){void 0===e&&void 0!==window&&(e=window),"function"==typeof define&&define.amd?define(["jquery"],function(e){return t(e)}):"object"==typeof module&&module.exports?module.exports=t(require("jquery")):t(e.jQuery)}(this,function(e){!function(z){"use strict";var d=["sanitize","whiteList","sanitizeFn"],r=["background","cite","href","itemtype","longdesc","poster","src","xlink:href"],e={"*":["class","dir","id","lang","role","tabindex","style",/^aria-[\w-]*$/i],a:["target","href","title","rel"],area:[],b:[],br:[],col:[],code:[],div:[],em:[],hr:[],h1:[],h2:[],h3:[],h4:[],h5:[],h6:[],i:[],img:["src","alt","title","width","height"],li:[],ol:[],p:[],pre:[],s:[],small:[],span:[],sub:[],sup:[],strong:[],u:[],ul:[]},l=/^(?:(?:https?|mailto|ftp|tel|file):|[^&:/?#]*(?:[/?#]|$))/gi,a=/^data:(?:image\/(?:bmp|gif|jpeg|jpg|png|tiff|webp)|video\/(?:mpeg|mp4|ogg|webm)|audio\/(?:mp3|oga|ogg|opus));base64,[a-z0-9+/]+=*$/i;function v(e,t){var i=e.nodeName.toLowerCase();if(-1!==z.inArray(i,t))return-1===z.inArray(i,r)||Boolean(e.nodeValue.match(l)||e.nodeValue.match(a));for(var s=z(t).filter(function(e,t){return t instanceof RegExp}),n=0,o=s.length;n]+>/g,"")),s&&(a=w(a)),a=a.toUpperCase(),o="contains"===i?0<=a.indexOf(t):a.startsWith(t)))break}return o}function L(e){return parseInt(e,10)||0}z.fn.triggerNative=function(e){var 
t,i=this[0];i.dispatchEvent?(u?t=new Event(e,{bubbles:!0}):(t=document.createEvent("Event")).initEvent(e,!0,!1),i.dispatchEvent(t)):i.fireEvent?((t=document.createEventObject()).eventType=e,i.fireEvent("on"+e,t)):this.trigger(e)};var f={"\xc0":"A","\xc1":"A","\xc2":"A","\xc3":"A","\xc4":"A","\xc5":"A","\xe0":"a","\xe1":"a","\xe2":"a","\xe3":"a","\xe4":"a","\xe5":"a","\xc7":"C","\xe7":"c","\xd0":"D","\xf0":"d","\xc8":"E","\xc9":"E","\xca":"E","\xcb":"E","\xe8":"e","\xe9":"e","\xea":"e","\xeb":"e","\xcc":"I","\xcd":"I","\xce":"I","\xcf":"I","\xec":"i","\xed":"i","\xee":"i","\xef":"i","\xd1":"N","\xf1":"n","\xd2":"O","\xd3":"O","\xd4":"O","\xd5":"O","\xd6":"O","\xd8":"O","\xf2":"o","\xf3":"o","\xf4":"o","\xf5":"o","\xf6":"o","\xf8":"o","\xd9":"U","\xda":"U","\xdb":"U","\xdc":"U","\xf9":"u","\xfa":"u","\xfb":"u","\xfc":"u","\xdd":"Y","\xfd":"y","\xff":"y","\xc6":"Ae","\xe6":"ae","\xde":"Th","\xfe":"th","\xdf":"ss","\u0100":"A","\u0102":"A","\u0104":"A","\u0101":"a","\u0103":"a","\u0105":"a","\u0106":"C","\u0108":"C","\u010a":"C","\u010c":"C","\u0107":"c","\u0109":"c","\u010b":"c","\u010d":"c","\u010e":"D","\u0110":"D","\u010f":"d","\u0111":"d","\u0112":"E","\u0114":"E","\u0116":"E","\u0118":"E","\u011a":"E","\u0113":"e","\u0115":"e","\u0117":"e","\u0119":"e","\u011b":"e","\u011c":"G","\u011e":"G","\u0120":"G","\u0122":"G","\u011d":"g","\u011f":"g","\u0121":"g","\u0123":"g","\u0124":"H","\u0126":"H","\u0125":"h","\u0127":"h","\u0128":"I","\u012a":"I","\u012c":"I","\u012e":"I","\u0130":"I","\u0129":"i","\u012b":"i","\u012d":"i","\u012f":"i","\u0131":"i","\u0134":"J","\u0135":"j","\u0136":"K","\u0137":"k","\u0138":"k","\u0139":"L","\u013b":"L","\u013d":"L","\u013f":"L","\u0141":"L","\u013a":"l","\u013c":"l","\u013e":"l","\u0140":"l","\u0142":"l","\u0143":"N","\u0145":"N","\u0147":"N","\u014a":"N","\u0144":"n","\u0146":"n","\u0148":"n","\u014b":"n","\u014c":"O","\u014e":"O","\u0150":"O","\u014d":"o","\u014f":"o","\u0151":"o","\u0154":"R","\u0156":"R","\u0158":"R","\u0155":"
r","\u0157":"r","\u0159":"r","\u015a":"S","\u015c":"S","\u015e":"S","\u0160":"S","\u015b":"s","\u015d":"s","\u015f":"s","\u0161":"s","\u0162":"T","\u0164":"T","\u0166":"T","\u0163":"t","\u0165":"t","\u0167":"t","\u0168":"U","\u016a":"U","\u016c":"U","\u016e":"U","\u0170":"U","\u0172":"U","\u0169":"u","\u016b":"u","\u016d":"u","\u016f":"u","\u0171":"u","\u0173":"u","\u0174":"W","\u0175":"w","\u0176":"Y","\u0177":"y","\u0178":"Y","\u0179":"Z","\u017b":"Z","\u017d":"Z","\u017a":"z","\u017c":"z","\u017e":"z","\u0132":"IJ","\u0133":"ij","\u0152":"Oe","\u0153":"oe","\u0149":"'n","\u017f":"s"},m=/[\xc0-\xd6\xd8-\xf6\xf8-\xff\u0100-\u017f]/g,g=RegExp("[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff\\u1ab0-\\u1aff\\u1dc0-\\u1dff]","g");function b(e){return f[e]}function w(e){return(e=e.toString())&&e.replace(m,b).replace(g,"")}var I,x,y,$,S=(I={"&":"&","<":"<",">":">",'"':""","'":"'","`":"`"},x="(?:"+Object.keys(I).join("|")+")",y=RegExp(x),$=RegExp(x,"g"),function(e){return e=null==e?"":""+e,y.test(e)?e.replace($,E):e});function E(e){return I[e]}var C={32:" ",48:"0",49:"1",50:"2",51:"3",52:"4",53:"5",54:"6",55:"7",56:"8",57:"9",59:";",65:"A",66:"B",67:"C",68:"D",69:"E",70:"F",71:"G",72:"H",73:"I",74:"J",75:"K",76:"L",77:"M",78:"N",79:"O",80:"P",81:"Q",82:"R",83:"S",84:"T",85:"U",86:"V",87:"W",88:"X",89:"Y",90:"Z",96:"0",97:"1",98:"2",99:"3",100:"4",101:"5",102:"6",103:"7",104:"8",105:"9"},N=27,D=13,H=32,W=9,B=38,M=40,R={success:!1,major:"3"};try{R.full=(z.fn.dropdown.Constructor.VERSION||"").split(" ")[0].split("."),R.major=R.full[0],R.success=!0}catch(e){}var 
U=0,j=".bs.select",V={DISABLED:"disabled",DIVIDER:"divider",SHOW:"open",DROPUP:"dropup",MENU:"dropdown-menu",MENURIGHT:"dropdown-menu-right",MENULEFT:"dropdown-menu-left",BUTTONCLASS:"btn-default",POPOVERHEADER:"popover-title",ICONBASE:"glyphicon",TICKICON:"glyphicon-ok"},F={MENU:"."+V.MENU},_={span:document.createElement("span"),i:document.createElement("i"),subtext:document.createElement("small"),a:document.createElement("a"),li:document.createElement("li"),whitespace:document.createTextNode("\xa0"),fragment:document.createDocumentFragment()};_.a.setAttribute("role","option"),"4"===R.major&&(_.a.className="dropdown-item"),_.subtext.className="text-muted",_.text=_.span.cloneNode(!1),_.text.className="text",_.checkMark=_.span.cloneNode(!1);var G=new RegExp(B+"|"+M),q=new RegExp("^"+W+"$|"+N),K={li:function(e,t,i){var s=_.li.cloneNode(!1);return e&&(1===e.nodeType||11===e.nodeType?s.appendChild(e):s.innerHTML=e),void 0!==t&&""!==t&&(s.className=t),null!=i&&s.classList.add("optgroup-"+i),s},a:function(e,t,i){var s=_.a.cloneNode(!0);return e&&(11===e.nodeType?s.appendChild(e):s.insertAdjacentHTML("beforeend",e)),void 0!==t&&""!==t&&s.classList.add.apply(s.classList,t.split(" ")),i&&s.setAttribute("style",i),s},text:function(e,t){var i,s,n=_.text.cloneNode(!1);if(e.content)n.innerHTML=e.content;else{if(n.textContent=e.text,e.icon){var o=_.whitespace.cloneNode(!1);(s=(!0===t?_.i:_.span).cloneNode(!1)).className=this.options.iconBase+" "+e.icon,_.fragment.appendChild(s),_.fragment.appendChild(o)}e.subtext&&((i=_.subtext.cloneNode(!1)).textContent=e.subtext,n.appendChild(i))}if(!0===t)for(;0'},maxOptions:!1,mobile:!1,selectOnTab:!1,dropdownAlignRight:!1,windowPadding:0,virtualScroll:600,display:!1,sanitize:!0,sanitizeFn:null,whiteList:e},Y.prototype={constructor:Y,init:function(){var 
i=this,e=this.$element.attr("id");U++,this.selectId="bs-select-"+U,this.$element[0].classList.add("bs-select-hidden"),this.multiple=this.$element.prop("multiple"),this.autofocus=this.$element.prop("autofocus"),this.$element[0].classList.contains("show-tick")&&(this.options.showTick=!0),this.$newElement=this.createDropdown(),this.buildData(),this.$element.after(this.$newElement).prependTo(this.$newElement),this.$button=this.$newElement.children("button"),this.$menu=this.$newElement.children(F.MENU),this.$menuInner=this.$menu.children(".inner"),this.$searchbox=this.$menu.find("input"),this.$element[0].classList.remove("bs-select-hidden"),!0===this.options.dropdownAlignRight&&this.$menu[0].classList.add(V.MENURIGHT),void 0!==e&&this.$button.attr("data-id",e),this.checkDisabled(),this.clickListener(),this.options.liveSearch?(this.liveSearchListener(),this.focusedParent=this.$searchbox[0]):this.focusedParent=this.$menuInner[0],this.setStyle(),this.render(),this.setWidth(),this.options.container?this.selectPosition():this.$element.on("hide"+j,function(){if(i.isVirtual()){var 
e=i.$menuInner[0],t=e.firstChild.cloneNode(!1);e.replaceChild(t,e.firstChild),e.scrollTop=0}}),this.$menu.data("this",this),this.$newElement.data("this",this),this.options.mobile&&this.mobile(),this.$newElement.on({"hide.bs.dropdown":function(e){i.$element.trigger("hide"+j,e)},"hidden.bs.dropdown":function(e){i.$element.trigger("hidden"+j,e)},"show.bs.dropdown":function(e){i.$element.trigger("show"+j,e)},"shown.bs.dropdown":function(e){i.$element.trigger("shown"+j,e)}}),i.$element[0].hasAttribute("required")&&this.$element.on("invalid"+j,function(){i.$button[0].classList.add("bs-invalid"),i.$element.on("shown"+j+".invalid",function(){i.$element.val(i.$element.val()).off("shown"+j+".invalid")}).on("rendered"+j,function(){this.validity.valid&&i.$button[0].classList.remove("bs-invalid"),i.$element.off("rendered"+j)}),i.$button.on("blur"+j,function(){i.$element.trigger("focus").trigger("blur"),i.$button.off("blur"+j)})}),setTimeout(function(){i.buildList(),i.$element.trigger("loaded"+j)})},createDropdown:function(){var e=this.multiple||this.options.showTick?" show-tick":"",t=this.multiple?' aria-multiselectable="true"':"",i="",s=this.autofocus?" autofocus":"";R.major<4&&this.$element.parent().hasClass("input-group")&&(i=" input-group-btn");var n,o="",r="",l="",a="";return this.options.header&&(o='
'+this.options.header+"
"),this.options.liveSearch&&(r=''),this.multiple&&this.options.actionsBox&&(l='
"),this.multiple&&this.options.doneButton&&(a='
"),n='",z(n)},setPositionData:function(){this.selectpicker.view.canHighlight=[];for(var e=this.selectpicker.view.size=0;e=this.options.virtualScroll||!0===this.options.virtualScroll},createView:function(A,e,t){var L,N,D=this,i=0,H=[];if(this.selectpicker.isSearching=A,this.selectpicker.current=A?this.selectpicker.search:this.selectpicker.main,this.setPositionData(),e)if(t)i=this.$menuInner[0].scrollTop;else if(!D.multiple){var s=D.$element[0],n=(s.options[s.selectedIndex]||{}).liIndex;if("number"==typeof n&&!1!==D.options.size){var o=D.selectpicker.main.data[n],r=o&&o.position;r&&(i=r-(D.sizeInfo.menuInnerHeight+D.sizeInfo.liHeight)/2)}}function l(e,t){var i,s,n,o,r,l,a,c,d=D.selectpicker.current.elements.length,h=[],p=!0,u=D.isVirtual();D.selectpicker.view.scrollTop=e,i=Math.ceil(D.sizeInfo.menuInnerHeight/D.sizeInfo.liHeight*1.5),s=Math.round(d/i)||1;for(var f=0;fd-1?0:D.selectpicker.current.data[d-1].position-D.selectpicker.current.data[D.selectpicker.view.position1-1].position,b.firstChild.style.marginTop=v+"px",b.firstChild.style.marginBottom=g+"px"):(b.firstChild.style.marginTop=0,b.firstChild.style.marginBottom=0),b.firstChild.appendChild(w),!0===u&&D.sizeInfo.hasScrollBar){var C=b.firstChild.offsetWidth;if(t&&CD.sizeInfo.selectWidth)b.firstChild.style.minWidth=D.sizeInfo.menuInnerInnerWidth+"px";else if(C>D.sizeInfo.menuInnerInnerWidth){D.$menu[0].style.minWidth=0;var O=b.firstChild.offsetWidth;O>D.sizeInfo.menuInnerInnerWidth&&(D.sizeInfo.menuInnerInnerWidth=O,b.firstChild.style.minWidth=D.sizeInfo.menuInnerInnerWidth+"px"),D.$menu[0].style.minWidth=""}}}if(D.prevActiveIndex=D.activeIndex,D.options.liveSearch){if(A&&t){var z,T=0;D.selectpicker.view.canHighlight[T]||(T=1+D.selectpicker.view.canHighlight.slice(1).indexOf(!0)),z=D.selectpicker.view.visibleElements[T],D.defocusItem(D.selectpicker.view.currentActive),D.activeIndex=(D.selectpicker.current.data[T]||{}).index,D.focusItem(z)}}else 
D.$menuInner.trigger("focus")}l(i,!0),this.$menuInner.off("scroll.createView").on("scroll.createView",function(e,t){D.noScroll||l(this.scrollTop,t),D.noScroll=!1}),z(window).off("resize"+j+"."+this.selectId+".createView").on("resize"+j+"."+this.selectId+".createView",function(){D.$newElement.hasClass(V.SHOW)&&l(D.$menuInner[0].scrollTop)})},focusItem:function(e,t,i){if(e){t=t||this.selectpicker.main.data[this.activeIndex];var s=e.firstChild;s&&(s.setAttribute("aria-setsize",this.selectpicker.view.size),s.setAttribute("aria-posinset",t.posinset),!0!==i&&(this.focusedParent.setAttribute("aria-activedescendant",s.id),e.classList.add("active"),s.classList.add("active")))}},defocusItem:function(e){e&&(e.classList.remove("active"),e.firstChild&&e.firstChild.classList.remove("active"))},setPlaceholder:function(){var e=!1;if(this.options.title&&!this.multiple){this.selectpicker.view.titleOption||(this.selectpicker.view.titleOption=document.createElement("option")),e=!0;var t=this.$element[0],i=!1,s=!this.selectpicker.view.titleOption.parentNode;if(s)this.selectpicker.view.titleOption.className="bs-title-option",this.selectpicker.view.titleOption.value="",i=void 0===z(t.options[t.selectedIndex]).attr("selected")&&void 0===this.$element.data("selected");!s&&0===this.selectpicker.view.titleOption.index||t.insertBefore(this.selectpicker.view.titleOption,t.firstChild),i&&(t.selectedIndex=0)}return e},buildData:function(){var p=':not([hidden]):not([data-hidden="true"])',u=[],f=0,e=this.setPlaceholder()?1:0;this.options.hideDisabled&&(p+=":not(:disabled)");var t=this.$element[0].querySelectorAll("select > *"+p);function m(e){var t=u[u.length-1];t&&"divider"===t.type&&(t.optID||e.optID)||((e=e||{}).type="divider",u.push(e))}function v(e,t){if((t=t||{}).divider="true"===e.getAttribute("data-divider"),t.divider)m({optID:t.optID});else{var i=u.length,s=e.style.cssText,n=s?S(s):"",o=(e.className||"")+(t.optgroupClass||"");t.optID&&(o="opt 
"+o),t.optionClass=o.trim(),t.inlineStyle=n,t.text=e.textContent,t.content=e.getAttribute("data-content"),t.tokens=e.getAttribute("data-tokens"),t.subtext=e.getAttribute("data-subtext"),t.icon=e.getAttribute("data-icon"),e.liIndex=i,t.display=t.content||t.text,t.type="option",t.index=i,t.option=e,t.selected=!!e.selected,t.disabled=t.disabled||!!e.disabled,u.push(t)}}function i(e,t){var i=t[e],s=t[e-1],n=t[e+1],o=i.querySelectorAll("option"+p);if(o.length){var r,l,a={display:S(i.label),subtext:i.getAttribute("data-subtext"),icon:i.getAttribute("data-icon"),type:"optgroup-label",optgroupClass:" "+(i.className||"")};f++,s&&m({optID:f}),a.optID=f,u.push(a);for(var c=0,d=o.length;c li")},render:function(){var e,t=this,i=this.$element[0],s=this.setPlaceholder()&&0===i.selectedIndex,n=O(i,this.options.hideDisabled),o=n.length,r=this.$button[0],l=r.querySelector(".filter-option-inner-inner"),a=document.createTextNode(this.options.multipleSeparator),c=_.fragment.cloneNode(!1),d=!1;if(r.classList.toggle("bs-placeholder",t.multiple?!o:!T(i,n)),this.tabIndex(),"static"===this.options.selectedTextFormat)c=K.text.call(this,{text:this.options.title},!0);else if(!1===(this.multiple&&-1!==this.options.selectedTextFormat.indexOf("count")&&1")).length&&o>e[1]||1===e.length&&2<=o))){if(!s){for(var h=0;h option"+m+", optgroup"+m+" option"+m).length,g="function"==typeof this.options.countSelectedText?this.options.countSelectedText(o,v):this.options.countSelectedText;c=K.text.call(this,{text:g.replace("{0}",o.toString()).replace("{1}",v.toString())},!0)}if(null==this.options.title&&(this.options.title=this.$element.attr("title")),c.childNodes.length||(c=K.text.call(this,{text:void 0!==this.options.title?this.options.title:this.options.noneSelectedText},!0)),r.title=c.textContent.replace(/<[^>]*>?/g,"").trim(),this.options.sanitize&&d&&P([c],t.options.whiteList,t.options.sanitizeFn),l.innerHTML="",l.appendChild(c),R.major<4&&this.$newElement[0].classList.contains("bs3-has-addon")){var 
b=r.querySelector(".filter-expand"),w=l.cloneNode(!0);w.className="filter-expand",b?r.replaceChild(w,b):r.appendChild(w)}this.$element.trigger("rendered"+j)},setStyle:function(e,t){var i,s=this.$button[0],n=this.$newElement[0],o=this.options.style.trim();this.$element.attr("class")&&this.$newElement.addClass(this.$element.attr("class").replace(/selectpicker|mobile-device|bs-select-hidden|validate\[.*\]/gi,"")),R.major<4&&(n.classList.add("bs3"),n.parentNode.classList.contains("input-group")&&(n.previousElementSibling||n.nextElementSibling)&&(n.previousElementSibling||n.nextElementSibling).classList.contains("input-group-addon")&&n.classList.add("bs3-has-addon")),i=e?e.trim():o,"add"==t?i&&s.classList.add.apply(s.classList,i.split(" ")):"remove"==t?i&&s.classList.remove.apply(s.classList,i.split(" ")):(o&&s.classList.remove.apply(s.classList,o.split(" ")),i&&s.classList.add.apply(s.classList,i.split(" ")))},liHeight:function(e){if(e||!1!==this.options.size&&!Object.keys(this.sizeInfo).length){var t=document.createElement("div"),i=document.createElement("div"),s=document.createElement("div"),n=document.createElement("ul"),o=document.createElement("li"),r=document.createElement("li"),l=document.createElement("li"),a=document.createElement("a"),c=document.createElement("span"),d=this.options.header&&0this.sizeInfo.menuExtras.vert&&l+this.sizeInfo.menuExtras.vert+50>this.sizeInfo.selectOffsetBot,!0===this.selectpicker.isSearching&&(a=this.selectpicker.dropup),this.$newElement.toggleClass(V.DROPUP,a),this.selectpicker.dropup=a),"auto"===this.options.size)n=3this.options.size){for(var b=0;bthis.sizeInfo.menuInnerHeight&&(this.sizeInfo.hasScrollBar=!0,this.sizeInfo.totalMenuWidth=this.sizeInfo.menuWidth+this.sizeInfo.scrollBarWidth),"auto"===this.options.dropdownAlignRight&&this.$menu.toggleClass(V.MENURIGHT,this.sizeInfo.selectOffsetLeft>this.sizeInfo.selectOffsetRight&&this.sizeInfo.selectOffsetRightthis.options.size&&i.off("resize"+j+"."+this.selectId+".setMenuSize 
scroll"+j+"."+this.selectId+".setMenuSize")}this.createView(!1,!0,e)},setWidth:function(){var i=this;"auto"===this.options.width?requestAnimationFrame(function(){i.$menu.css("min-width","0"),i.$element.on("loaded"+j,function(){i.liHeight(),i.setMenuSize();var e=i.$newElement.clone().appendTo("body"),t=e.css("width","auto").children("button").outerWidth();e.remove(),i.sizeInfo.selectWidth=Math.max(i.sizeInfo.totalMenuWidth,t),i.$newElement.css("width",i.sizeInfo.selectWidth+"px")})}):"fit"===this.options.width?(this.$menu.css("min-width",""),this.$newElement.css("width","").addClass("fit-width")):this.options.width?(this.$menu.css("min-width",""),this.$newElement.css("width",this.options.width)):(this.$menu.css("min-width",""),this.$newElement.css("width","")),this.$newElement.hasClass("fit-width")&&"fit"!==this.options.width&&this.$newElement[0].classList.remove("fit-width")},selectPosition:function(){this.$bsContainer=z('
');function e(e){var t={},i=r.options.display||!!z.fn.dropdown.Constructor.Default&&z.fn.dropdown.Constructor.Default.display;r.$bsContainer.addClass(e.attr("class").replace(/form-control|fit-width/gi,"")).toggleClass(V.DROPUP,e.hasClass(V.DROPUP)),s=e.offset(),l.is("body")?n={top:0,left:0}:((n=l.offset()).top+=parseInt(l.css("borderTopWidth"))-l.scrollTop(),n.left+=parseInt(l.css("borderLeftWidth"))-l.scrollLeft()),o=e.hasClass(V.DROPUP)?0:e[0].offsetHeight,(R.major<4||"static"===i)&&(t.top=s.top-n.top+o,t.left=s.left-n.left),t.width=e[0].offsetWidth,r.$bsContainer.css(t)}var s,n,o,r=this,l=z(this.options.container);this.$button.on("click.bs.dropdown.data-api",function(){r.isDisabled()||(e(r.$newElement),r.$bsContainer.appendTo(r.options.container).toggleClass(V.SHOW,!r.$button.hasClass(V.SHOW)).append(r.$menu))}),z(window).off("resize"+j+"."+this.selectId+" scroll"+j+"."+this.selectId).on("resize"+j+"."+this.selectId+" scroll"+j+"."+this.selectId,function(){r.$newElement.hasClass(V.SHOW)&&e(r.$newElement)}),this.$element.on("hide"+j,function(){r.$menu.data("height",r.$menu.height()),r.$bsContainer.detach()})},setOptionStatus:function(e){var t=this;if(t.noScroll=!1,t.selectpicker.view.visibleElements&&t.selectpicker.view.visibleElements.length)for(var i=0;i
');y[2]&&($=$.replace("{var}",y[2][1"+$+"")),d=!1,C.$element.trigger("maxReached"+j)),g&&w&&(E.append(z("
"+S+"
")),d=!1,C.$element.trigger("maxReachedGrp"+j)),setTimeout(function(){C.setSelected(r,!1)},10),E[0].classList.add("fadeOut"),setTimeout(function(){E.remove()},1050)}}}else c&&(c.selected=!1),h.selected=!0,C.setSelected(r,!0);!C.multiple||C.multiple&&1===C.options.maxOptions?C.$button.trigger("focus"):C.options.liveSearch&&C.$searchbox.trigger("focus"),d&&(!C.multiple&&a===s.selectedIndex||(A=[h.index,p.prop("selected"),l],C.$element.triggerNative("change")))}}),this.$menu.on("click","li."+V.DISABLED+" a, ."+V.POPOVERHEADER+", ."+V.POPOVERHEADER+" :not(.close)",function(e){e.currentTarget==this&&(e.preventDefault(),e.stopPropagation(),C.options.liveSearch&&!z(e.target).hasClass("close")?C.$searchbox.trigger("focus"):C.$button.trigger("focus"))}),this.$menuInner.on("click",".divider, .dropdown-header",function(e){e.preventDefault(),e.stopPropagation(),C.options.liveSearch?C.$searchbox.trigger("focus"):C.$button.trigger("focus")}),this.$menu.on("click","."+V.POPOVERHEADER+" .close",function(){C.$button.trigger("click")}),this.$searchbox.on("click",function(e){e.stopPropagation()}),this.$menu.on("click",".actions-btn",function(e){C.options.liveSearch?C.$searchbox.trigger("focus"):C.$button.trigger("focus"),e.preventDefault(),e.stopPropagation(),z(this).hasClass("bs-select-all")?C.selectAll():C.deselectAll()}),this.$element.on("change"+j,function(){C.render(),C.$element.trigger("changed"+j,A),A=null}).on("focus"+j,function(){C.options.mobile||C.$button.trigger("focus")})},liveSearchListener:function(){var u=this,f=document.createElement("li");this.$button.on("click.bs.dropdown.data-api",function(){u.$searchbox.val()&&u.$searchbox.val("")}),this.$searchbox.on("click.bs.dropdown.data-api focus.bs.dropdown.data-api touchend.bs.dropdown.data-api",function(e){e.stopPropagation()}),this.$searchbox.on("input propertychange",function(){var e=u.$searchbox.val();if(u.selectpicker.search.elements=[],u.selectpicker.search.data=[],e){var 
t=[],i=e.toUpperCase(),s={},n=[],o=u._searchStyle(),r=u.options.liveSearchNormalize;r&&(i=w(i));for(var l=0;l=a.selectpicker.view.canHighlight.length&&(t=0),a.selectpicker.view.canHighlight[t+f]||(t=t+1+a.selectpicker.view.canHighlight.slice(t+f+1).indexOf(!0))),e.preventDefault();var m=f+t;e.which===B?0===f&&t===c.length-1?(a.$menuInner[0].scrollTop=a.$menuInner[0].scrollHeight,m=a.selectpicker.current.elements.length-1):d=(o=(n=a.selectpicker.current.data[m]).position-n.height)u+a.sizeInfo.menuInnerHeight),s=a.selectpicker.main.elements[v],a.activeIndex=b[x],a.focusItem(s),s&&s.firstChild.focus(),d&&(a.$menuInner[0].scrollTop=o),r.trigger("focus")}}i&&(e.which===H&&!a.selectpicker.keydown.keyHistory||e.which===D||e.which===W&&a.options.selectOnTab)&&(e.which!==H&&e.preventDefault(),a.options.liveSearch&&e.which===H||(a.$menuInner.find(".active a").trigger("click",!0),r.trigger("focus"),a.options.liveSearch||(e.preventDefault(),z(document).data("spaceSelect",!0))))}},mobile:function(){this.$element[0].classList.add("mobile-device")},refresh:function(){var e=z.extend({},this.options,this.$element.data());this.options=e,this.checkDisabled(),this.setStyle(),this.render(),this.buildData(),this.buildList(),this.setWidth(),this.setSize(!0),this.$element.trigger("refreshed"+j)},hide:function(){this.$newElement.hide()},show:function(){this.$newElement.show()},remove:function(){this.$newElement.remove(),this.$element.remove()},destroy:function(){this.$newElement.before(this.$element).remove(),this.$bsContainer?this.$bsContainer.remove():this.$menu.remove(),this.$element.off(j).removeData("selectpicker").removeClass("bs-select-hidden selectpicker"),z(window).off(j+"."+this.selectId)}};var J=z.fn.selectpicker;z.fn.selectpicker=Z,z.fn.selectpicker.Constructor=Y,z.fn.selectpicker.noConflict=function(){return z.fn.selectpicker=J,this};var 
Q=z.fn.dropdown.Constructor._dataApiKeydownHandler||z.fn.dropdown.Constructor.prototype.keydown;z(document).off("keydown.bs.dropdown.data-api").on("keydown.bs.dropdown.data-api",':not(.bootstrap-select) > [data-toggle="dropdown"]',Q).on("keydown.bs.dropdown.data-api",":not(.bootstrap-select) > .dropdown-menu",Q).on("keydown"+j,'.bootstrap-select [data-toggle="dropdown"], .bootstrap-select [role="listbox"], .bootstrap-select .bs-searchbox input',Y.prototype.keydown).on("focusin.modal",'.bootstrap-select [data-toggle="dropdown"], .bootstrap-select [role="listbox"], .bootstrap-select .bs-searchbox input',function(e){e.stopPropagation()}),z(window).on("load"+j+".data-api",function(){z(".selectpicker").each(function(){var e=z(this);Z.call(e,e.data())})})}(e)}); +//# sourceMappingURL=bootstrap-select.min.js.map \ No newline at end of file diff --git a/cps/static/js/libs/bootstrap-select/defaults-cs.min.js b/cps/static/js/libs/bootstrap-select/defaults-cs.min.js new file mode 100644 index 00000000..be309a10 --- /dev/null +++ b/cps/static/js/libs/bootstrap-select/defaults-cs.min.js @@ -0,0 +1,8 @@ +/*! 
+ * Bootstrap-select v1.13.14 (https://developer.snapappointments.com/bootstrap-select) + * + * Copyright 2012-2020 SnapAppointments, LLC + * Licensed under MIT (https://github.com/snapappointments/bootstrap-select/blob/master/LICENSE) + */ + +!function(e,n){void 0===e&&void 0!==window&&(e=window),"function"==typeof define&&define.amd?define(["jquery"],function(e){return n(e)}):"object"==typeof module&&module.exports?module.exports=n(require("jquery")):n(e.jQuery)}(this,function(e){e.fn.selectpicker.defaults={noneSelectedText:"Vyberte ze seznamu",noneResultsText:"Pro hled\xe1n\xed {0} nebyly nalezeny \u017e\xe1dn\xe9 v\xfdsledky",countSelectedText:"Vybran\xe9 {0} z {1}",maxOptionsText:["Limit p\u0159ekro\u010den ({n} {var} max)","Limit skupiny p\u0159ekro\u010den ({n} {var} max)",["polo\u017eek","polo\u017eka"]],multipleSeparator:", ",selectAllText:"Vybrat v\u0161e",deselectAllText:"Zru\u0161it v\xfdb\u011br"}}); \ No newline at end of file diff --git a/cps/static/js/libs/bootstrap-select/defaults-de.min.js b/cps/static/js/libs/bootstrap-select/defaults-de.min.js new file mode 100644 index 00000000..e625440b --- /dev/null +++ b/cps/static/js/libs/bootstrap-select/defaults-de.min.js @@ -0,0 +1,8 @@ +/*! 
+ * Bootstrap-select v1.13.14 (https://developer.snapappointments.com/bootstrap-select) + * + * Copyright 2012-2020 SnapAppointments, LLC + * Licensed under MIT (https://github.com/snapappointments/bootstrap-select/blob/master/LICENSE) + */ + +!function(e,t){void 0===e&&void 0!==window&&(e=window),"function"==typeof define&&define.amd?define(["jquery"],function(e){return t(e)}):"object"==typeof module&&module.exports?module.exports=t(require("jquery")):t(e.jQuery)}(this,function(e){e.fn.selectpicker.defaults={noneSelectedText:"Bitte w\xe4hlen...",noneResultsText:"Keine Ergebnisse f\xfcr {0}",countSelectedText:function(e,t){return 1==e?"{0} Element ausgew\xe4hlt":"{0} Elemente ausgew\xe4hlt"},maxOptionsText:function(e,t){return[1==e?"Limit erreicht ({n} Element max.)":"Limit erreicht ({n} Elemente max.)",1==t?"Gruppen-Limit erreicht ({n} Element max.)":"Gruppen-Limit erreicht ({n} Elemente max.)"]},selectAllText:"Alles ausw\xe4hlen",deselectAllText:"Nichts ausw\xe4hlen",multipleSeparator:", "}}); \ No newline at end of file diff --git a/cps/static/js/libs/bootstrap-select/defaults-es.min.js b/cps/static/js/libs/bootstrap-select/defaults-es.min.js new file mode 100644 index 00000000..25efec39 --- /dev/null +++ b/cps/static/js/libs/bootstrap-select/defaults-es.min.js @@ -0,0 +1,8 @@ +/*! 
+ * Bootstrap-select v1.13.14 (https://developer.snapappointments.com/bootstrap-select) + * + * Copyright 2012-2020 SnapAppointments, LLC + * Licensed under MIT (https://github.com/snapappointments/bootstrap-select/blob/master/LICENSE) + */ + +!function(e,o){void 0===e&&void 0!==window&&(e=window),"function"==typeof define&&define.amd?define(["jquery"],function(e){return o(e)}):"object"==typeof module&&module.exports?module.exports=o(require("jquery")):o(e.jQuery)}(this,function(e){e.fn.selectpicker.defaults={noneSelectedText:"No hay selecci\xf3n",noneResultsText:"No hay resultados {0}",countSelectedText:"Seleccionados {0} de {1}",maxOptionsText:["L\xedmite alcanzado ({n} {var} max)","L\xedmite del grupo alcanzado({n} {var} max)",["elementos","element"]],multipleSeparator:", ",selectAllText:"Seleccionar Todos",deselectAllText:"Desmarcar Todos"}}); \ No newline at end of file diff --git a/cps/static/js/libs/bootstrap-select/defaults-fi.min.js b/cps/static/js/libs/bootstrap-select/defaults-fi.min.js new file mode 100644 index 00000000..bee14048 --- /dev/null +++ b/cps/static/js/libs/bootstrap-select/defaults-fi.min.js @@ -0,0 +1,8 @@ +/*! 
+ * Bootstrap-select v1.13.14 (https://developer.snapappointments.com/bootstrap-select) + * + * Copyright 2012-2020 SnapAppointments, LLC + * Licensed under MIT (https://github.com/snapappointments/bootstrap-select/blob/master/LICENSE) + */ + +!function(e,t){void 0===e&&void 0!==window&&(e=window),"function"==typeof define&&define.amd?define(["jquery"],function(e){return t(e)}):"object"==typeof module&&module.exports?module.exports=t(require("jquery")):t(e.jQuery)}(this,function(e){e.fn.selectpicker.defaults={noneSelectedText:"Ei valintoja",noneResultsText:"Ei hakutuloksia {0}",countSelectedText:function(e,t){return 1==e?"{0} valittu":"{0} valitut"},maxOptionsText:function(e,t){return["Valintojen maksimim\xe4\xe4r\xe4 ({n} saavutettu)","Ryhm\xe4n maksimim\xe4\xe4r\xe4 ({n} saavutettu)"]},selectAllText:"Valitse kaikki",deselectAllText:"Poista kaikki",multipleSeparator:", "}}); \ No newline at end of file diff --git a/cps/static/js/libs/bootstrap-select/defaults-fr.min.js b/cps/static/js/libs/bootstrap-select/defaults-fr.min.js new file mode 100644 index 00000000..d8931590 --- /dev/null +++ b/cps/static/js/libs/bootstrap-select/defaults-fr.min.js @@ -0,0 +1,8 @@ +/*! 
+ * Bootstrap-select v1.13.14 (https://developer.snapappointments.com/bootstrap-select) + * + * Copyright 2012-2020 SnapAppointments, LLC + * Licensed under MIT (https://github.com/snapappointments/bootstrap-select/blob/master/LICENSE) + */ + +!function(e,t){void 0===e&&void 0!==window&&(e=window),"function"==typeof define&&define.amd?define(["jquery"],function(e){return t(e)}):"object"==typeof module&&module.exports?module.exports=t(require("jquery")):t(e.jQuery)}(this,function(e){e.fn.selectpicker.defaults={noneSelectedText:"Aucune s\xe9lection",noneResultsText:"Aucun r\xe9sultat pour {0}",countSelectedText:function(e,t){return 1"].join(""):k}}})},b.prototype.initBody=function(){var b=this;d.apply(this,Array.prototype.slice.apply(arguments)),this.options.editable&&(a.each(this.columns,function(c,d){d.editable&&(b.$body.find('a[data-name="'+d.field+'"]').editable(d.editable).off("save").on("save",function(c,e){var f=b.getData(),g=a(this).parents("tr[data-index]").data("index"),h=f[g],i=h[d.field];a(this).data("value",e.submitValue),h[d.field]=e.submitValue,b.trigger("editable-save",d.field,h,i,a(this)),b.resetFooter()}),b.$body.find('a[data-name="'+d.field+'"]').editable(d.editable).off("shown").on("shown",function(c,e){var f=b.getData(),g=a(this).parents("tr[data-index]").data("index"),h=f[g];b.trigger("editable-shown",d.field,h,a(this),e)}),b.$body.find('a[data-name="'+d.field+'"]').editable(d.editable).off("hidden").on("hidden",function(c,e){var f=b.getData(),g=a(this).parents("tr[data-index]").data("index"),h=f[g];b.trigger("editable-hidden",d.field,h,a(this),e)}))}),this.trigger("editable-init"))}}(jQuery); \ No newline at end of file +/** + * bootstrap-table - An extended table to integration with some of the most widely used CSS frameworks. 
(Supports Bootstrap, Semantic UI, Bulma, Material Design, Foundation) + * + * @version v1.16.0 + * @homepage https://bootstrap-table.com + * @author wenzhixin (http://wenzhixin.net.cn/) + * @license MIT + */ + +!function(t,e){"object"==typeof exports&&"undefined"!=typeof module?e(require("jquery")):"function"==typeof define&&define.amd?define(["jquery"],e):e((t=t||self).jQuery)}(this,(function(t){"use strict";t=t&&t.hasOwnProperty("default")?t.default:t;var e="undefined"!=typeof globalThis?globalThis:"undefined"!=typeof window?window:"undefined"!=typeof global?global:"undefined"!=typeof self?self:{};function n(t,e){return t(e={exports:{}},e.exports),e.exports}var r=function(t){return t&&t.Math==Math&&t},o=r("object"==typeof globalThis&&globalThis)||r("object"==typeof window&&window)||r("object"==typeof self&&self)||r("object"==typeof e&&e)||Function("return this")(),i=function(t){try{return!!t()}catch(t){return!0}},a=!i((function(){return 7!=Object.defineProperty({},"a",{get:function(){return 7}}).a})),c={}.propertyIsEnumerable,u=Object.getOwnPropertyDescriptor,f={f:u&&!c.call({1:2},1)?function(t){var e=u(this,t);return!!e&&e.enumerable}:c},l=function(t,e){return{enumerable:!(1&t),configurable:!(2&t),writable:!(4&t),value:e}},s={}.toString,d=function(t){return s.call(t).slice(8,-1)},p="".split,h=i((function(){return!Object("z").propertyIsEnumerable(0)}))?function(t){return"String"==d(t)?p.call(t,""):Object(t)}:Object,v=function(t){if(null==t)throw TypeError("Can't call method on "+t);return t},y=function(t){return h(v(t))},g=function(t){return"object"==typeof t?null!==t:"function"==typeof t},b=function(t,e){if(!g(t))return t;var n,r;if(e&&"function"==typeof(n=t.toString)&&!g(r=n.call(t)))return r;if("function"==typeof(n=t.valueOf)&&!g(r=n.call(t)))return r;if(!e&&"function"==typeof(n=t.toString)&&!g(r=n.call(t)))return r;throw TypeError("Can't convert object to primitive value")},m={}.hasOwnProperty,x=function(t,e){return 
m.call(t,e)},O=o.document,w=g(O)&&g(O.createElement),E=function(t){return w?O.createElement(t):{}},j=!a&&!i((function(){return 7!=Object.defineProperty(E("div"),"a",{get:function(){return 7}}).a})),S=Object.getOwnPropertyDescriptor,T={f:a?S:function(t,e){if(t=y(t),e=b(e,!0),j)try{return S(t,e)}catch(t){}if(x(t,e))return l(!f.f.call(t,e),t[e])}},P=function(t){if(!g(t))throw TypeError(String(t)+" is not an object");return t},A=Object.defineProperty,_={f:a?A:function(t,e,n){if(P(t),e=b(e,!0),P(n),j)try{return A(t,e,n)}catch(t){}if("get"in n||"set"in n)throw TypeError("Accessors not supported");return"value"in n&&(t[e]=n.value),t}},I=a?function(t,e,n){return _.f(t,e,l(1,n))}:function(t,e,n){return t[e]=n,t},R=function(t,e){try{I(o,t,e)}catch(n){o[t]=e}return e},C=o["__core-js_shared__"]||R("__core-js_shared__",{}),k=Function.toString;"function"!=typeof C.inspectSource&&(C.inspectSource=function(t){return k.call(t)});var M,$,F,D=C.inspectSource,N=o.WeakMap,q="function"==typeof N&&/native code/.test(D(N)),B=n((function(t){(t.exports=function(t,e){return C[t]||(C[t]=void 0!==e?e:{})})("versions",[]).push({version:"3.6.0",mode:"global",copyright:"© 2019 Denis Pushkarev (zloirock.ru)"})})),L=0,K=Math.random(),V=function(t){return"Symbol("+String(void 0===t?"":t)+")_"+(++L+K).toString(36)},U=B("keys"),W=function(t){return U[t]||(U[t]=V(t))},z={},Y=o.WeakMap;if(q){var G=new Y,H=G.get,Q=G.has,X=G.set;M=function(t,e){return X.call(G,t,e),e},$=function(t){return H.call(G,t)||{}},F=function(t){return Q.call(G,t)}}else{var Z=W("state");z[Z]=!0,M=function(t,e){return I(t,Z,e),e},$=function(t){return x(t,Z)?t[Z]:{}},F=function(t){return x(t,Z)}}var J,tt,et={set:M,get:$,has:F,enforce:function(t){return F(t)?$(t):M(t,{})},getterFor:function(t){return function(e){var n;if(!g(e)||(n=$(e)).type!==t)throw TypeError("Incompatible receiver, "+t+" required");return n}}},nt=n((function(t){var e=et.get,n=et.enforce,r=String(String).split("String");(t.exports=function(t,e,i,a){var 
c=!!a&&!!a.unsafe,u=!!a&&!!a.enumerable,f=!!a&&!!a.noTargetGet;"function"==typeof i&&("string"!=typeof e||x(i,"name")||I(i,"name",e),n(i).source=r.join("string"==typeof e?e:"")),t!==o?(c?!f&&t[e]&&(u=!0):delete t[e],u?t[e]=i:I(t,e,i)):u?t[e]=i:R(e,i)})(Function.prototype,"toString",(function(){return"function"==typeof this&&e(this).source||D(this)}))})),rt=o,ot=function(t){return"function"==typeof t?t:void 0},it=function(t,e){return arguments.length<2?ot(rt[t])||ot(o[t]):rt[t]&&rt[t][e]||o[t]&&o[t][e]},at=Math.ceil,ct=Math.floor,ut=function(t){return isNaN(t=+t)?0:(t>0?ct:at)(t)},ft=Math.min,lt=function(t){return t>0?ft(ut(t),9007199254740991):0},st=Math.max,dt=Math.min,pt=function(t){return function(e,n,r){var o,i=y(e),a=lt(i.length),c=function(t,e){var n=ut(t);return n<0?st(n+e,0):dt(n,e)}(r,a);if(t&&n!=n){for(;a>c;)if((o=i[c++])!=o)return!0}else for(;a>c;c++)if((t||c in i)&&i[c]===n)return t||c||0;return!t&&-1}},ht={includes:pt(!0),indexOf:pt(!1)},vt=ht.indexOf,yt=function(t,e){var n,r=y(t),o=0,i=[];for(n in r)!x(z,n)&&x(r,n)&&i.push(n);for(;e.length>o;)x(r,n=e[o++])&&(~vt(i,n)||i.push(n));return i},gt=["constructor","hasOwnProperty","isPrototypeOf","propertyIsEnumerable","toLocaleString","toString","valueOf"],bt=gt.concat("length","prototype"),mt={f:Object.getOwnPropertyNames||function(t){return yt(t,bt)}},xt={f:Object.getOwnPropertySymbols},Ot=it("Reflect","ownKeys")||function(t){var e=mt.f(P(t)),n=xt.f;return n?e.concat(n(t)):e},wt=function(t,e){for(var n=Ot(e),r=_.f,o=T.f,i=0;i=74)&&(J=Vt.match(/Chrome\/(\d+)/))&&(tt=J[1]);var Yt,Gt=tt&&+tt,Ht=Bt("species"),Qt=Bt("isConcatSpreadable"),Xt=Gt>=51||!i((function(){var t=[];return t[Qt]=!1,t.concat()[0]!==t})),Zt=(Yt="concat",Gt>=51||!i((function(){var t=[];return(t.constructor={})[Ht]=function(){return{foo:1}},1!==t[Yt](Boolean).foo}))),Jt=function(t){if(!g(t))return!1;var e=t[Qt];return void 0!==e?!!e:Ct(t)};Rt({target:"Array",proto:!0,forced:!Xt||!Zt},{concat:function(t){var 
e,n,r,o,i,a=kt(this),c=Kt(a,0),u=0;for(e=-1,r=arguments.length;e9007199254740991)throw TypeError("Maximum allowed index exceeded");for(n=0;n=9007199254740991)throw TypeError("Maximum allowed index exceeded");Mt(c,u++,i)}return c.length=u,c}});var te,ee=function(t,e,n){if(function(t){if("function"!=typeof t)throw TypeError(String(t)+" is not a function")}(t),void 0===e)return t;switch(n){case 0:return function(){return t.call(e)};case 1:return function(n){return t.call(e,n)};case 2:return function(n,r){return t.call(e,n,r)};case 3:return function(n,r,o){return t.call(e,n,r,o)}}return function(){return t.apply(e,arguments)}},ne=[].push,re=function(t){var e=1==t,n=2==t,r=3==t,o=4==t,i=6==t,a=5==t||i;return function(c,u,f,l){for(var s,d,p=kt(c),v=h(p),y=ee(u,f,3),g=lt(v.length),b=0,m=l||Kt,x=e?m(c,g):n?m(c,0):void 0;g>b;b++)if((a||b in v)&&(d=y(s=v[b],b,p),t))if(e)x[b]=d;else if(d)switch(t){case 3:return!0;case 5:return s;case 6:return b;case 2:ne.call(x,s)}else if(o)return!1;return i?-1:r||o?o:x}},oe={forEach:re(0),map:re(1),filter:re(2),some:re(3),every:re(4),find:re(5),findIndex:re(6)},ie=Object.keys||function(t){return yt(t,gt)},ae=a?Object.defineProperties:function(t,e){P(t);for(var n,r=ie(e),o=r.length,i=0;o>i;)_.f(t,n=r[i++],e[n]);return t},ce=it("document","documentElement"),ue=W("IE_PROTO"),fe=function(){},le=function(t){return" + + + + + {% endblock %} diff --git a/cps/templates/config_edit.html b/cps/templates/config_edit.html index 132db5cf..716ed7ba 100644 --- a/cps/templates/config_edit.html +++ b/cps/templates/config_edit.html @@ -15,13 +15,20 @@
-
- + +
+ {% if filepicker %} - + + {% endif %}
+ {% if not filepicker %} +
+ +
+ {% endif %} {% if feature_support['gdrive'] %}
@@ -60,10 +67,10 @@ {% else %} - Enable watch of metadata.db + Enable watch of metadata.db {% endif %} {% endif %} {% endif %} @@ -94,14 +101,14 @@
- +
- +
@@ -264,9 +271,26 @@
-
- + +
+ + + + +
+ +
+ + + +
+ +
+ + + +
@@ -314,6 +338,20 @@
+
+ + +
+
+
+ + +
+
+
{% endif %} {% if feature_support['oauth'] %} @@ -353,7 +391,7 @@
- +
@@ -364,7 +402,7 @@
- +
{% if feature_support['rar'] %} @@ -372,7 +410,7 @@
- +
{% endif %} @@ -381,8 +419,6 @@
{% endif %}
- -
{% if not show_login_button %} @@ -397,6 +433,9 @@
{% endblock %} +{% block modal %} +{{ filechooser_modal() }} +{% endblock %} {% block js %} @@ -205,14 +198,7 @@ - - {% if g.current_theme == 1 %} - - - - - {% endif %} + + {% if g.current_theme == 1 %} + + + + + {% endif %} {% block js %}{% endblock %} diff --git a/cps/templates/list.html b/cps/templates/list.html index a2938be1..53a712d1 100644 --- a/cps/templates/list.html +++ b/cps/templates/list.html @@ -8,8 +8,8 @@ {% endif %} {% endif %} - - + + {% if charlist|length %} {% endif %} @@ -20,7 +20,7 @@
{% if data == "series" %} - + {% endif %}
@@ -32,7 +32,7 @@ {% endif %}
{{entry.count}}
-
+
{% if entry.name %}
{% for number in range(entry.name) %} diff --git a/cps/templates/login.html b/cps/templates/login.html index a031d949..7ae56d1b 100644 --- a/cps/templates/login.html +++ b/cps/templates/login.html @@ -22,7 +22,7 @@ {% endif %} {% if config.config_remote_login %} - {{_('Log in with Magic Link')}} + {{_('Log in with Magic Link')}} {% endif %} {% if config.config_login_type == 2 %} {% if 1 in oauth_check %} diff --git a/cps/templates/logviewer.html b/cps/templates/logviewer.html index e74ec27e..9827d15e 100644 --- a/cps/templates/logviewer.html +++ b/cps/templates/logviewer.html @@ -12,7 +12,18 @@ {{logfiles[1]}}
{% endif %}
+
+
+ {% if log_enable %} + + {% endif %} + {% if accesslog_enable %} + + {% endif %} +
+
+ {% endblock %} {% block js %} diff --git a/cps/templates/modal_dialogs.html b/cps/templates/modal_dialogs.html new file mode 100644 index 00000000..da00834c --- /dev/null +++ b/cps/templates/modal_dialogs.html @@ -0,0 +1,123 @@ +{% macro restrict_modal() %} + +{% endmacro %} +{% macro delete_book() %} +{% if g.user.role_delete_books() %} + +{% endif %} +{% endmacro %} +{% macro filechooser_modal() %} + +{% endmacro %} + +{% macro delete_confirm_modal() %} + + +{% endmacro %} diff --git a/cps/templates/modal_restriction.html b/cps/templates/modal_restriction.html deleted file mode 100644 index 26aa2ee1..00000000 --- a/cps/templates/modal_restriction.html +++ /dev/null @@ -1,39 +0,0 @@ -{% macro restrict_modal() %} - -{% endmacro %} diff --git a/cps/templates/read.html b/cps/templates/read.html index a4361fa0..b38f783c 100644 --- a/cps/templates/read.html +++ b/cps/templates/read.html @@ -78,7 +78,7 @@
- + - - - diff --git a/cps/templates/readcbr.html b/cps/templates/readcbr.html index 14b9752c..35943b34 100644 --- a/cps/templates/readcbr.html +++ b/cps/templates/readcbr.html @@ -13,9 +13,10 @@ + - + +