# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
/cgpenv/Lib
Metadata-Version: 1.1
Name: args
Version: 0.1.0
Summary: Command Arguments for Humans.
Home-page: https://github.com/kennethreitz/args
Author: Kenneth Reitz
Author-email: me@kennethreitz.com
License: BSD
Description:
args
~~~~
This simple module gives you an elegant interface for your command line arguments.
Platform: any
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
Classifier: Topic :: Software Development :: Libraries :: Python Modules
args.py
setup.cfg
setup.py
args.egg-info/PKG-INFO
args.egg-info/SOURCES.txt
args.egg-info/dependency_links.txt
args.egg-info/not-zip-safe
args.egg-info/top_level.txt
..\__pycache__\args.cpython-38.pyc
..\args.py
PKG-INFO
SOURCES.txt
dependency_links.txt
not-zip-safe
top_level.txt
Copyright (c) Django Software Foundation and individual contributors.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of Django nor the names of its contributors may be used
to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Metadata-Version: 2.1
Name: asgiref
Version: 3.4.1
Summary: ASGI specs, helper code, and adapters
Home-page: https://github.com/django/asgiref/
Author: Django Software Foundation
Author-email: foundation@djangoproject.com
License: BSD
Project-URL: Documentation, https://asgi.readthedocs.io/
Project-URL: Further Documentation, https://docs.djangoproject.com/en/stable/topics/async/#async-adapter-functions
Project-URL: Changelog, https://github.com/django/asgiref/blob/master/CHANGELOG.txt
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Topic :: Internet :: WWW/HTTP
Requires-Python: >=3.6
License-File: LICENSE
Requires-Dist: typing-extensions ; python_version < "3.8"
Provides-Extra: tests
Requires-Dist: pytest ; extra == 'tests'
Requires-Dist: pytest-asyncio ; extra == 'tests'
Requires-Dist: mypy (>=0.800) ; extra == 'tests'
asgiref
=======
.. image:: https://api.travis-ci.org/django/asgiref.svg
:target: https://travis-ci.org/django/asgiref
.. image:: https://img.shields.io/pypi/v/asgiref.svg
:target: https://pypi.python.org/pypi/asgiref
ASGI is a standard for Python asynchronous web apps and servers to communicate
with each other, and is positioned as an asynchronous successor to WSGI. You can
read more at https://asgi.readthedocs.io/en/latest/
This package includes ASGI base libraries, such as:
* Sync-to-async and async-to-sync function wrappers, ``asgiref.sync``
* Server base classes, ``asgiref.server``
* A WSGI-to-ASGI adapter, in ``asgiref.wsgi``
Function wrappers
-----------------
These allow you to wrap or decorate async or sync functions to call them from
the other style (so you can call async functions from a synchronous thread,
or vice-versa).
In particular:
* AsyncToSync lets a synchronous subthread stop and wait while the async
function is called on the main thread's event loop, and then control is
returned to the thread when the async function is finished.
* SyncToAsync lets async code call a synchronous function, which is run in
a threadpool and control returned to the async coroutine when the synchronous
function completes.
The idea is to make it easier to call synchronous APIs from async code and
asynchronous APIs from synchronous code so it's easier to transition code from
one style to the other. In the case of Channels, we wrap the (synchronous)
Django view system with SyncToAsync to allow it to run inside the (asynchronous)
ASGI server.
Note that exactly which threads things run in is tightly specified, in order to
keep maximum compatibility with old synchronous code. See
"Synchronous code & Threads" below for a full explanation. By default,
``sync_to_async`` will run all synchronous code in the program in the same
thread for safety reasons; you can disable this for more performance with
``@sync_to_async(thread_sensitive=False)``, but make sure that your code does
not rely on anything bound to threads (like database connections) when you do.
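For example, a minimal sketch of both directions (the function names here are
illustrative)::

    from asgiref.sync import async_to_sync, sync_to_async

    async def get_answer():
        return 42

    def blocking_add(x, y):
        return x + y

    # Call async code from sync code:
    answer = async_to_sync(get_answer)()

    # Call sync code from async code (it runs in a threadpool):
    async def main():
        return await sync_to_async(blocking_add)(answer, 1)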
Threadlocal replacement
-----------------------
This is a drop-in replacement for ``threading.local`` that works with both
threads and asyncio Tasks. Even better, it will proxy values through from a
task-local context to a thread-local context when you use ``sync_to_async``
to run things in a threadpool, and vice-versa for ``async_to_sync``.
If you instead want true thread- and task-safety, you can set
``thread_critical`` on the Local object to ensure this.
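A short sketch of the basic behaviour (values set in one context are visible
again when you return to that context)::

    from asgiref.local import Local

    state = Local()
    state.user_id = 42    # stored for the current thread/task context
    print(state.user_id)  # -> 42 when read from the same context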
Server base classes
-------------------
Includes a ``StatelessServer`` class which provides all the hard work of
writing a stateless server (as in, does not handle direct incoming sockets
but instead consumes external streams or sockets to work out what is happening).
An example of such a server would be a chatbot server that connects out to
a central chat server and provides a "connection scope" per user chatting to
it. There's only one actual connection, but the server has to separate things
into several scopes for easier writing of the code.
You can see an example of this being used in `frequensgi <https://github.com/andrewgodwin/frequensgi>`_.
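As a minimal sketch, a subclass only needs ``handle()`` and
``application_send()``; the ``echo`` scope type and message shapes below are
invented for illustration, and a real ``handle()`` would loop over an external
event source::

    from asgiref.server import StatelessServer

    class EchoServer(StatelessServer):
        async def handle(self):
            # Route an inbound event into a per-user application instance.
            queue = self.get_or_create_application_instance(
                "user-1", {"type": "echo", "user_id": "1"}
            )
            queue.put_nowait({"type": "echo.message", "text": "hi"})

        async def application_send(self, scope, message):
            print("Application replied:", message)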
WSGI-to-ASGI adapter
--------------------
Allows you to wrap a WSGI application so it appears as a valid ASGI application.
Simply wrap it around your WSGI application like so::
asgi_application = WsgiToAsgi(wsgi_application)
The WSGI application will be run in a synchronous threadpool, and the wrapped
ASGI application will be one that accepts ``http`` class messages.
Please note that not all extended features of WSGI may be supported (such as
file handles for incoming POST bodies).
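For instance, wrapping a trivial WSGI application (a sketch; any WSGI callable
works the same way)::

    from asgiref.wsgi import WsgiToAsgi

    def wsgi_application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello, ASGI!"]

    asgi_application = WsgiToAsgi(wsgi_application)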
Dependencies
------------
``asgiref`` requires Python 3.6 or higher.
Contributing
------------
Please refer to the
`main Channels contributing docs <https://github.com/django/channels/blob/master/CONTRIBUTING.rst>`_.
Testing
'''''''
To run tests, make sure you have installed the ``tests`` extra with the package::
cd asgiref/
pip install -e .[tests]
pytest
Building the documentation
''''''''''''''''''''''''''
The documentation uses `Sphinx <http://www.sphinx-doc.org>`_::
cd asgiref/docs/
pip install sphinx
To build the docs, you can use the default tools::
sphinx-build -b html . _build/html # or `make html`, if you've got make set up
cd _build/html
python -m http.server
...or you can use ``sphinx-autobuild`` to run a server and rebuild/reload
your documentation changes automatically::
pip install sphinx-autobuild
sphinx-autobuild . _build/html
Releasing
'''''''''
To release, first add details to CHANGELOG.txt and update the version number in ``asgiref/__init__.py``.
Then, build and push the packages::
python -m build
twine upload dist/*
rm -r build/ dist/
Implementation Details
----------------------
Synchronous code & threads
''''''''''''''''''''''''''
The ``asgiref.sync`` module provides two wrappers that let you go between
asynchronous and synchronous code at will, while taking care of the rough edges
for you.
Unfortunately, the rough edges are numerous, and the code has to work especially
hard to keep things in the same thread as much as possible. Notably, the
restrictions we are working with are:
* All synchronous code called through ``SyncToAsync`` and marked with
``thread_sensitive`` should run in the same thread as each other (and if the
outer layer of the program is synchronous, the main thread)
* If a thread already has a running async loop, ``AsyncToSync`` can't run things
on that loop if it's blocked on synchronous code that is above you in the
call stack.
The first compromise you get to might be that ``thread_sensitive`` code should
just run in the same thread and not spawn in a sub-thread, fulfilling the first
restriction, but that immediately runs you into the second restriction.
The only real solution is to essentially have a variant of ThreadPoolExecutor
that executes any ``thread_sensitive`` code on the outermost synchronous
thread - either the main thread, or a single spawned subthread.
This means you now have two basic states:
* If the outermost layer of your program is synchronous, then all async code
run through ``AsyncToSync`` will run in a per-call event loop in arbitrary
sub-threads, while all ``thread_sensitive`` code will run in the main thread.
* If the outermost layer of your program is asynchronous, then all async code
runs on the main thread's event loop, and all ``thread_sensitive`` synchronous
code will run in a single shared sub-thread.
Crucially, this means that in both cases there is a thread which is a shared
resource that all ``thread_sensitive`` code must run on, and there is a chance
that this thread is currently blocked on its own ``AsyncToSync`` call. Thus,
``AsyncToSync`` needs to act as an executor for thread code while it's blocking.
The ``CurrentThreadExecutor`` class provides this functionality; rather than
simply waiting on a Future, you can call its ``run_until_future`` method and
it will run submitted code until that Future is done. This means that code
inside the call can then run code on your thread.
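A self-contained sketch of that handshake::

    import threading
    from concurrent.futures import Future

    from asgiref.current_thread_executor import CurrentThreadExecutor

    executor = CurrentThreadExecutor()
    done = Future()

    def other_thread():
        # Work submitted here executes back on the executor's home thread.
        executor.submit(print, "runs on the home thread").result()
        done.set_result(None)

    threading.Thread(target=other_thread).start()
    # Services submitted work items until ``done`` resolves.
    executor.run_until_future(done)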
Maintenance and Security
------------------------
To report security issues, please contact security@djangoproject.com. For GPG
signatures and more security process information, see
https://docs.djangoproject.com/en/dev/internals/security/.
To report bugs or request new features, please open a new GitHub issue.
This repository is part of the Channels project. For the shepherd and maintenance team, please see the
`main Channels readme <https://github.com/django/channels/blob/master/README.rst>`_.
asgiref-3.4.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
asgiref-3.4.1.dist-info/LICENSE,sha256=uEZBXRtRTpwd_xSiLeuQbXlLxUbKYSn5UKGM0JHipmk,1552
asgiref-3.4.1.dist-info/METADATA,sha256=TZrVDUz2BP8ewHAkOyGQ8izCsiaq6YHMvj_TW5W7i2E,9162
asgiref-3.4.1.dist-info/RECORD,,
asgiref-3.4.1.dist-info/WHEEL,sha256=OqRkF0eY5GHssMorFjlbTIq072vpHpF60fIQA6lS9xA,92
asgiref-3.4.1.dist-info/top_level.txt,sha256=bokQjCzwwERhdBiPdvYEZa4cHxT4NCeAffQNUqJ8ssg,8
asgiref/__init__.py,sha256=z3MJNttjzZJkd4Yv_Ut_X2qO_gIKi4TijrHVpefXuRM,22
asgiref/__pycache__/__init__.cpython-38.pyc,,
asgiref/__pycache__/_pep562.cpython-38.pyc,,
asgiref/__pycache__/compatibility.cpython-38.pyc,,
asgiref/__pycache__/current_thread_executor.cpython-38.pyc,,
asgiref/__pycache__/local.cpython-38.pyc,,
asgiref/__pycache__/server.cpython-38.pyc,,
asgiref/__pycache__/sync.cpython-38.pyc,,
asgiref/__pycache__/testing.cpython-38.pyc,,
asgiref/__pycache__/timeout.cpython-38.pyc,,
asgiref/__pycache__/typing.cpython-38.pyc,,
asgiref/__pycache__/wsgi.cpython-38.pyc,,
asgiref/_pep562.py,sha256=fyD3JhfLtViIGeXBtvhhbnbQ-R_8-nmwzbXHhncY6ow,2684
asgiref/compatibility.py,sha256=4Plx8PT3wlDzZeuCN2cATfaXq6rya1OuSdJ262I5S6Y,2022
asgiref/current_thread_executor.py,sha256=oeH8zv2tTmcbpxdUmOSMzbEXzeY5nJzIMFvzprE95gA,2801
asgiref/local.py,sha256=D9kRIDARSUixNbxK8HL2O8vFhRCx_fc3fFB9uv0vG-g,4892
asgiref/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
asgiref/server.py,sha256=lAxZxOxkdvxB073ZtYOAzN1JZ8aV-DOiFpVQJZ0X2FI,6018
asgiref/sync.py,sha256=LCEHMPNiuoVtKrmI0kksoHeFjoSS5zLEG-n95UNd91Q,20310
asgiref/testing.py,sha256=3byNRV7Oto_Fg8Z-fErQJ3yGf7OQlcUexbN_cDQugzQ,3119
asgiref/timeout.py,sha256=UUYuUSY30dsqBsVzVAS7z9raQ9ntZGktScJw_Y_9iSU,3889
asgiref/typing.py,sha256=-2wmtHqkhzV52rbMfipGTJmo8jUoU0i5AQECFH6y7aY,6722
asgiref/wsgi.py,sha256=-L0eo_uK_dq7EPjv1meW1BRGytURaO9NPESxnJc9CtA,6575
Wheel-Version: 1.0
Generator: bdist_wheel (0.36.2)
Root-Is-Purelib: true
Tag: py3-none-any
"""
Backport of PEP 562.
https://pypi.org/search/?q=pep562
Licensed under MIT
Copyright (c) 2018 Isaac Muse <isaacmuse@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
"""
import sys
from typing import Any, Callable, List, Optional
class Pep562:
"""
Backport of PEP 562 <https://pypi.org/search/?q=pep562>.
Wraps the module in a class that exposes the mechanics to override `__dir__` and `__getattr__`.
The given module will be searched for overrides of `__dir__` and `__getattr__` and use them when needed.
"""
def __init__(self, name: str) -> None:
"""Acquire `__getattr__` and `__dir__`, but only replace module for versions less than Python 3.7."""
self._module = sys.modules[name]
self._get_attr = getattr(self._module, "__getattr__", None)
self._get_dir: Optional[Callable[..., List[str]]] = getattr(
self._module, "__dir__", None
)
sys.modules[name] = self # type: ignore[assignment]
def __dir__(self) -> List[str]:
"""Return the overridden `dir` if one was provided, else apply `dir` to the module."""
return self._get_dir() if self._get_dir else dir(self._module)
def __getattr__(self, name: str) -> Any:
"""
Attempt to retrieve the attribute from the module, and if missing, use the overridden function if present.
"""
try:
return getattr(self._module, name)
except AttributeError:
if self._get_attr:
return self._get_attr(name)
raise
def pep562(module_name: str) -> None:
"""Helper function to apply PEP 562."""
if sys.version_info < (3, 7):
Pep562(module_name)
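# Example usage (a sketch): a module that wants lazily computed attributes
# defines the module-level hooks and then applies the shim (see the bottom of
# asgiref/typing.py for the real call site in this package):
#
#     def __getattr__(name):
#         if name == "answer":
#             return 42
#         raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
#
#     pep562(__name__)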
import asyncio
import inspect
import sys
def is_double_callable(application):
"""
Tests to see if an application is a legacy-style (double-callable) application.
"""
# Look for a hint on the object first
if getattr(application, "_asgi_single_callable", False):
return False
if getattr(application, "_asgi_double_callable", False):
return True
# Uninstantiated classes are double-callable
if inspect.isclass(application):
return True
# Instantiated classes depend on their __call__
if hasattr(application, "__call__"):
# We only check to see if its __call__ is a coroutine function -
# if it's not, the application object itself might still be a coroutine function.
if asyncio.iscoroutinefunction(application.__call__):
return False
# Non-classes we just check directly
return not asyncio.iscoroutinefunction(application)
def double_to_single_callable(application):
"""
Transforms a double-callable ASGI application into a single-callable one.
"""
async def new_application(scope, receive, send):
instance = application(scope)
return await instance(receive, send)
return new_application
def guarantee_single_callable(application):
"""
Takes either a single- or double-callable application and always returns it
in single-callable style. Use this to add backwards compatibility for ASGI
2.0 applications to your server/test harness/etc.
"""
if is_double_callable(application):
application = double_to_single_callable(application)
return application
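# Example (a sketch): a server can normalise any application once up front and
# then always invoke it ASGI-3 style:
#
#     application = guarantee_single_callable(application)
#     await application(scope, receive, send)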
if sys.version_info >= (3, 7):
# these were introduced in 3.7
get_running_loop = asyncio.get_running_loop
run_future = asyncio.ensure_future
create_task = asyncio.create_task
else:
# marked as deprecated in 3.10, did not exist before 3.7
get_running_loop = asyncio.get_event_loop
run_future = asyncio.ensure_future
# does nothing, this is fine for <3.7
create_task = lambda task: task
import queue
import threading
from concurrent.futures import Executor, Future
class _WorkItem:
"""
Represents an item needing to be run in the executor.
Copied from ThreadPoolExecutor (but it's private, so we're not going to rely on importing it)
"""
def __init__(self, future, fn, args, kwargs):
self.future = future
self.fn = fn
self.args = args
self.kwargs = kwargs
def run(self):
if not self.future.set_running_or_notify_cancel():
return
try:
result = self.fn(*self.args, **self.kwargs)
except BaseException as exc:
self.future.set_exception(exc)
# Break a reference cycle with the exception 'exc'
self = None
else:
self.future.set_result(result)
class CurrentThreadExecutor(Executor):
"""
An Executor that actually runs code in the thread it is instantiated in.
Passed to other threads running async code, so they can run sync code in
the thread they came from.
"""
def __init__(self):
self._work_thread = threading.current_thread()
self._work_queue = queue.Queue()
self._broken = False
def run_until_future(self, future):
"""
Runs the code in the work queue until a result is available from the future.
Should be run from the thread the executor is initialised in.
"""
# Check we're in the right thread
if threading.current_thread() != self._work_thread:
raise RuntimeError(
"You cannot run CurrentThreadExecutor from a different thread"
)
future.add_done_callback(self._work_queue.put)
# Keep getting and running work items until we get the future we're waiting for
# back via the future's done callback.
try:
while True:
# Get a work item and run it
work_item = self._work_queue.get()
if work_item is future:
return
work_item.run()
del work_item
finally:
self._broken = True
def submit(self, fn, *args, **kwargs):
# Check they're not submitting from the same thread
if threading.current_thread() == self._work_thread:
raise RuntimeError(
"You cannot submit onto CurrentThreadExecutor from its own thread"
)
# Check they're not too late or the executor errored
if self._broken:
raise RuntimeError("CurrentThreadExecutor already quit or is broken")
# Add to work queue
f = Future()
work_item = _WorkItem(f, fn, args, kwargs)
self._work_queue.put(work_item)
# Return the future
return f
import random
import string
import sys
import threading
import weakref
class Local:
"""
A drop-in replacement for threading.local that also works with asyncio
Tasks (via the current_task asyncio method), and passes locals through
sync_to_async and async_to_sync.
Specifically:
- Locals work per-coroutine on any thread not spawned using asgiref
- Locals work per-thread on any thread not spawned using asgiref
- Locals are shared with the parent coroutine when using sync_to_async
- Locals are shared with the parent thread when using async_to_sync
(and if that thread was launched using sync_to_async, with its parent
coroutine as well, with this working for indefinite levels of nesting)
Set thread_critical to True to not allow locals to pass from an async Task
to a thread it spawns. This is needed for code that truly needs
thread-safety, as opposed to things used for helpful context (e.g. sqlite
does not like being called from a different thread to the one it is from).
Thread-critical code will still be differentiated per-Task within a thread
as it is expected it does not like concurrent access.
This doesn't use contextvars as it needs to support 3.6. Once 3.7 is the
minimum supported version, the storage can be reimplemented more nicely.
"""
CLEANUP_INTERVAL = 60 # seconds
def __init__(self, thread_critical: bool = False) -> None:
self._thread_critical = thread_critical
self._thread_lock = threading.RLock()
self._context_refs: "weakref.WeakSet[object]" = weakref.WeakSet()
# Random suffixes stop accidental reuse between different Locals,
# though we try to force deletion as well.
self._attr_name = "_asgiref_local_impl_{}_{}".format(
id(self),
"".join(random.choice(string.ascii_letters) for i in range(8)),
)
def _get_context_id(self):
"""
Get the ID we should use for looking up variables
"""
# Prevent a circular reference
from .sync import AsyncToSync, SyncToAsync
# First, pull the current task if we can
context_id = SyncToAsync.get_current_task()
context_is_async = True
# OK, let's try for a thread ID
if context_id is None:
context_id = threading.current_thread()
context_is_async = False
# If we're thread-critical, we stop here, as we can't share contexts.
if self._thread_critical:
return context_id
# Now, take those and see if we can resolve them through the launch maps
for i in range(sys.getrecursionlimit()):
try:
if context_is_async:
# Tasks have a source thread in AsyncToSync
context_id = AsyncToSync.launch_map[context_id]
context_is_async = False
else:
# Threads have a source task in SyncToAsync
context_id = SyncToAsync.launch_map[context_id]
context_is_async = True
except KeyError:
break
else:
# Catch infinite loops (they happen if you are screwing around
# with AsyncToSync implementations)
raise RuntimeError("Infinite launch_map loops")
return context_id
def _get_storage(self):
context_obj = self._get_context_id()
if not hasattr(context_obj, self._attr_name):
setattr(context_obj, self._attr_name, {})
self._context_refs.add(context_obj)
return getattr(context_obj, self._attr_name)
def __del__(self):
try:
for context_obj in self._context_refs:
try:
delattr(context_obj, self._attr_name)
except AttributeError:
pass
except TypeError:
# WeakSet.__iter__ can crash when interpreter is shutting down due
# to _IterationGuard being None.
pass
def __getattr__(self, key):
with self._thread_lock:
storage = self._get_storage()
if key in storage:
return storage[key]
else:
raise AttributeError(f"{self!r} object has no attribute {key!r}")
def __setattr__(self, key, value):
if key in ("_context_refs", "_thread_critical", "_thread_lock", "_attr_name"):
return super().__setattr__(key, value)
with self._thread_lock:
storage = self._get_storage()
storage[key] = value
def __delattr__(self, key):
with self._thread_lock:
storage = self._get_storage()
if key in storage:
del storage[key]
else:
raise AttributeError(f"{self!r} object has no attribute {key!r}")
import asyncio
import logging
import time
import traceback
from .compatibility import get_running_loop, guarantee_single_callable, run_future
logger = logging.getLogger(__name__)
class StatelessServer:
"""
Base server class that handles basic concepts like application instance
creation/pooling, exception handling, and similar, for stateless protocols
(i.e. ones without actual incoming connections to the process)
Your code should override the handle() method, doing whatever it needs to,
and calling get_or_create_application_instance with a unique `scope_id`
and `scope` for the scope it wants to get.
If an application instance is found with the same `scope_id`, you are
given its input queue, otherwise one is made for you with the scope provided
and you are given that fresh new input queue. Either way, you should do
something like:
input_queue = self.get_or_create_application_instance(
"user-123456",
{"type": "testprotocol", "user_id": "123456", "username": "andrew"},
)
input_queue.put_nowait(message)
If you try to create an application instance and there are already
`max_applications` instances, the oldest/least recently used one will be
reclaimed and shut down to make space.
Application coroutines that error will be found periodically (every 100ms
by default) and have their exceptions printed to the console. Override
application_exception() if you want to do more when this happens.
If you override run(), make sure you handle things like launching the
application checker.
"""
application_checker_interval = 0.1
def __init__(self, application, max_applications=1000):
# Parameters
self.application = application
self.max_applications = max_applications
# Initialisation
self.application_instances = {}
### Mainloop and handling
def run(self):
"""
Runs the asyncio event loop with our handler loop.
"""
event_loop = get_running_loop()
asyncio.ensure_future(self.application_checker())
try:
event_loop.run_until_complete(self.handle())
except KeyboardInterrupt:
logger.info("Exiting due to Ctrl-C/interrupt")
async def handle(self):
raise NotImplementedError("You must implement handle()")
async def application_send(self, scope, message):
"""
Receives outbound sends from applications and handles them.
"""
raise NotImplementedError("You must implement application_send()")
### Application instance management
def get_or_create_application_instance(self, scope_id, scope):
"""
Creates an application instance and returns its queue.
"""
if scope_id in self.application_instances:
self.application_instances[scope_id]["last_used"] = time.time()
return self.application_instances[scope_id]["input_queue"]
# See if we need to delete an old one
while len(self.application_instances) > self.max_applications:
self.delete_oldest_application_instance()
# Make an instance of the application
input_queue = asyncio.Queue()
application_instance = guarantee_single_callable(self.application)
# Run it, and stash the future for later checking
future = run_future(
application_instance(
scope=scope,
receive=input_queue.get,
send=lambda message: self.application_send(scope, message),
),
)
self.application_instances[scope_id] = {
"input_queue": input_queue,
"future": future,
"scope": scope,
"last_used": time.time(),
}
return input_queue
def delete_oldest_application_instance(self):
"""
Finds and deletes the oldest application instance
"""
oldest_time = min(
details["last_used"] for details in self.application_instances.values()
)
for scope_id, details in self.application_instances.items():
if details["last_used"] == oldest_time:
self.delete_application_instance(scope_id)
# Return to make sure we only delete one in case two have
# the same oldest time
return
def delete_application_instance(self, scope_id):
"""
Removes an application instance (makes sure its task is stopped,
then removes it from the current set)
"""
details = self.application_instances[scope_id]
del self.application_instances[scope_id]
if not details["future"].done():
details["future"].cancel()
async def application_checker(self):
"""
Goes through the set of current application instance Futures and cleans up
any that are done/prints exceptions for any that errored.
"""
while True:
await asyncio.sleep(self.application_checker_interval)
for scope_id, details in list(self.application_instances.items()):
if details["future"].done():
exception = details["future"].exception()
if exception:
await self.application_exception(exception, details)
try:
del self.application_instances[scope_id]
except KeyError:
# Exception handling might have already got here before us. That's fine.
pass
async def application_exception(self, exception, application_details):
"""
Called whenever an application coroutine has an exception.
"""
logger.error(
"Exception inside application: %s\n%s%s",
exception,
"".join(traceback.format_tb(exception.__traceback__)),
f" {exception}",
)
import asyncio
import time
from .compatibility import guarantee_single_callable
from .timeout import timeout as async_timeout
class ApplicationCommunicator:
"""
Runs an ASGI application in a test mode, allowing sending of
messages to it and retrieval of messages it sends.
"""
def __init__(self, application, scope):
self.application = guarantee_single_callable(application)
self.scope = scope
self.input_queue = asyncio.Queue()
self.output_queue = asyncio.Queue()
self.future = asyncio.ensure_future(
self.application(scope, self.input_queue.get, self.output_queue.put)
)
async def wait(self, timeout=1):
"""
Waits for the application to stop itself and returns any exceptions.
"""
try:
async with async_timeout(timeout):
try:
await self.future
self.future.result()
except asyncio.CancelledError:
pass
finally:
if not self.future.done():
self.future.cancel()
try:
await self.future
except asyncio.CancelledError:
pass
def stop(self, exceptions=True):
if not self.future.done():
self.future.cancel()
elif exceptions:
# Give a chance to raise any exceptions
self.future.result()
def __del__(self):
# Clean up on deletion
try:
self.stop(exceptions=False)
except RuntimeError:
# Event loop already stopped
pass
async def send_input(self, message):
"""
Sends a single message to the application
"""
# Give it the message
await self.input_queue.put(message)
async def receive_output(self, timeout=1):
"""
Receives a single message from the application, with optional timeout.
"""
# Make sure there's not an exception to raise from the task
if self.future.done():
self.future.result()
# Wait and receive the message
try:
async with async_timeout(timeout):
return await self.output_queue.get()
except asyncio.TimeoutError as e:
# See if we have another error to raise inside
if self.future.done():
self.future.result()
else:
self.future.cancel()
try:
await self.future
except asyncio.CancelledError:
pass
raise e
async def receive_nothing(self, timeout=0.1, interval=0.01):
"""
Checks that there is no message to receive in the given time.
"""
# `interval` has precedence over `timeout`
start = time.monotonic()
while time.monotonic() - start < timeout:
if not self.output_queue.empty():
return False
await asyncio.sleep(interval)
return self.output_queue.empty()
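# Example usage (a sketch, assuming a single-callable ASGI app ``app``):
#
#     communicator = ApplicationCommunicator(
#         app, {"type": "http", "method": "GET", "path": "/"}
#     )
#     await communicator.send_input(
#         {"type": "http.request", "body": b"", "more_body": False}
#     )
#     event = await communicator.receive_output(timeout=1)
#     await communicator.wait()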
# This code is originally sourced from the aio-libs project "async_timeout",
# under the Apache 2.0 license. You may see the original project at
# https://github.com/aio-libs/async-timeout
# It is vendored here to reduce chain-dependencies on this library, and
# modified slightly to remove some features we don't use.
import asyncio
import sys
from types import TracebackType
from typing import Any, Optional, Type
class timeout:
"""timeout context manager.
Useful in cases when you want to apply timeout logic around block
of code or in cases when asyncio.wait_for is not suitable. For example:
>>> async with timeout(0.001):
... async with aiohttp.get('https://github.com') as r:
... await r.text()
timeout - value in seconds or None to disable timeout logic
loop - asyncio compatible event loop
"""
def __init__(
self,
timeout: Optional[float],
*,
loop: Optional[asyncio.AbstractEventLoop] = None,
) -> None:
self._timeout = timeout
if loop is None:
loop = asyncio.get_event_loop()
self._loop = loop
self._task = None # type: Optional[asyncio.Task[Any]]
self._cancelled = False
self._cancel_handler = None # type: Optional[asyncio.Handle]
self._cancel_at = None # type: Optional[float]
def __enter__(self) -> "timeout":
return self._do_enter()
def __exit__(
self,
exc_type: Type[BaseException],
exc_val: BaseException,
exc_tb: TracebackType,
) -> Optional[bool]:
self._do_exit(exc_type)
return None
async def __aenter__(self) -> "timeout":
return self._do_enter()
async def __aexit__(
self,
exc_type: Type[BaseException],
exc_val: BaseException,
exc_tb: TracebackType,
) -> None:
self._do_exit(exc_type)
@property
def expired(self) -> bool:
return self._cancelled
@property
def remaining(self) -> Optional[float]:
if self._cancel_at is not None:
return max(self._cancel_at - self._loop.time(), 0.0)
else:
return None
def _do_enter(self) -> "timeout":
# Support Tornado 5- without timeout
# Details: https://github.com/python/asyncio/issues/392
if self._timeout is None:
return self
self._task = current_task(self._loop)
if self._task is None:
raise RuntimeError(
"Timeout context manager should be used " "inside a task"
)
if self._timeout <= 0:
self._loop.call_soon(self._cancel_task)
return self
self._cancel_at = self._loop.time() + self._timeout
self._cancel_handler = self._loop.call_at(self._cancel_at, self._cancel_task)
return self
def _do_exit(self, exc_type: Type[BaseException]) -> None:
if exc_type is asyncio.CancelledError and self._cancelled:
self._cancel_handler = None
self._task = None
raise asyncio.TimeoutError
if self._timeout is not None and self._cancel_handler is not None:
self._cancel_handler.cancel()
self._cancel_handler = None
self._task = None
return None
def _cancel_task(self) -> None:
if self._task is not None:
self._task.cancel()
self._cancelled = True
def current_task(loop: asyncio.AbstractEventLoop) -> "Optional[asyncio.Task[Any]]":
if sys.version_info >= (3, 7):
task = asyncio.current_task(loop=loop)
else:
task = asyncio.Task.current_task(loop=loop)
if task is None:
# this should be removed, tokio must use register_task and family API
fn = getattr(loop, "current_task", None)
if fn is not None:
task = fn()
return task
import sys
import warnings
from typing import (
Any,
Awaitable,
Callable,
Dict,
Iterable,
List,
Optional,
Tuple,
Type,
Union,
)
from asgiref._pep562 import pep562
if sys.version_info >= (3, 8):
from typing import Literal, Protocol, TypedDict
else:
from typing_extensions import Literal, Protocol, TypedDict
__all__ = (
"ASGIVersions",
"HTTPScope",
"WebSocketScope",
"LifespanScope",
"WWWScope",
"Scope",
"HTTPRequestEvent",
"HTTPResponseStartEvent",
"HTTPResponseBodyEvent",
"HTTPServerPushEvent",
"HTTPDisconnectEvent",
"WebSocketConnectEvent",
"WebSocketAcceptEvent",
"WebSocketReceiveEvent",
"WebSocketSendEvent",
"WebSocketResponseStartEvent",
"WebSocketResponseBodyEvent",
"WebSocketDisconnectEvent",
"WebSocketCloseEvent",
"LifespanStartupEvent",
"LifespanShutdownEvent",
"LifespanStartupCompleteEvent",
"LifespanStartupFailedEvent",
"LifespanShutdownCompleteEvent",
"LifespanShutdownFailedEvent",
"ASGIReceiveEvent",
"ASGISendEvent",
"ASGIReceiveCallable",
"ASGISendCallable",
"ASGI2Protocol",
"ASGI2Application",
"ASGI3Application",
"ASGIApplication",
)
class ASGIVersions(TypedDict):
spec_version: str
version: Union[Literal["2.0"], Literal["3.0"]]
class HTTPScope(TypedDict):
type: Literal["http"]
asgi: ASGIVersions
http_version: str
method: str
scheme: str
path: str
raw_path: bytes
query_string: bytes
root_path: str
headers: Iterable[Tuple[bytes, bytes]]
client: Optional[Tuple[str, int]]
server: Optional[Tuple[str, Optional[int]]]
extensions: Optional[Dict[str, Dict[object, object]]]
class WebSocketScope(TypedDict):
type: Literal["websocket"]
asgi: ASGIVersions
http_version: str
scheme: str
path: str
raw_path: bytes
query_string: bytes
root_path: str
headers: Iterable[Tuple[bytes, bytes]]
client: Optional[Tuple[str, int]]
server: Optional[Tuple[str, Optional[int]]]
subprotocols: Iterable[str]
extensions: Optional[Dict[str, Dict[object, object]]]
class LifespanScope(TypedDict):
type: Literal["lifespan"]
asgi: ASGIVersions
WWWScope = Union[HTTPScope, WebSocketScope]
Scope = Union[HTTPScope, WebSocketScope, LifespanScope]
class HTTPRequestEvent(TypedDict):
type: Literal["http.request"]
body: bytes
more_body: bool
class HTTPResponseStartEvent(TypedDict):
type: Literal["http.response.start"]
status: int
headers: Iterable[Tuple[bytes, bytes]]
class HTTPResponseBodyEvent(TypedDict):
type: Literal["http.response.body"]
body: bytes
more_body: bool
class HTTPServerPushEvent(TypedDict):
type: Literal["http.response.push"]
path: str
headers: Iterable[Tuple[bytes, bytes]]
class HTTPDisconnectEvent(TypedDict):
type: Literal["http.disconnect"]
class WebSocketConnectEvent(TypedDict):
type: Literal["websocket.connect"]
class WebSocketAcceptEvent(TypedDict):
type: Literal["websocket.accept"]
subprotocol: Optional[str]
headers: Iterable[Tuple[bytes, bytes]]
class WebSocketReceiveEvent(TypedDict):
type: Literal["websocket.receive"]
bytes: Optional[bytes]
text: Optional[str]
class WebSocketSendEvent(TypedDict):
type: Literal["websocket.send"]
bytes: Optional[bytes]
text: Optional[str]
class WebSocketResponseStartEvent(TypedDict):
type: Literal["websocket.http.response.start"]
status: int
headers: Iterable[Tuple[bytes, bytes]]
class WebSocketResponseBodyEvent(TypedDict):
type: Literal["websocket.http.response.body"]
body: bytes
more_body: bool
class WebSocketDisconnectEvent(TypedDict):
type: Literal["websocket.disconnect"]
code: int
class WebSocketCloseEvent(TypedDict):
type: Literal["websocket.close"]
code: int
reason: Optional[str]
class LifespanStartupEvent(TypedDict):
type: Literal["lifespan.startup"]
class LifespanShutdownEvent(TypedDict):
type: Literal["lifespan.shutdown"]
class LifespanStartupCompleteEvent(TypedDict):
type: Literal["lifespan.startup.complete"]
class LifespanStartupFailedEvent(TypedDict):
type: Literal["lifespan.startup.failed"]
message: str
class LifespanShutdownCompleteEvent(TypedDict):
type: Literal["lifespan.shutdown.complete"]
class LifespanShutdownFailedEvent(TypedDict):
type: Literal["lifespan.shutdown.failed"]
message: str
ASGIReceiveEvent = Union[
HTTPRequestEvent,
HTTPDisconnectEvent,
WebSocketConnectEvent,
WebSocketReceiveEvent,
WebSocketDisconnectEvent,
LifespanStartupEvent,
LifespanShutdownEvent,
]
ASGISendEvent = Union[
HTTPResponseStartEvent,
HTTPResponseBodyEvent,
HTTPServerPushEvent,
HTTPDisconnectEvent,
WebSocketAcceptEvent,
WebSocketSendEvent,
WebSocketResponseStartEvent,
WebSocketResponseBodyEvent,
WebSocketCloseEvent,
LifespanStartupCompleteEvent,
LifespanStartupFailedEvent,
LifespanShutdownCompleteEvent,
LifespanShutdownFailedEvent,
]
ASGIReceiveCallable = Callable[[], Awaitable[ASGIReceiveEvent]]
ASGISendCallable = Callable[[ASGISendEvent], Awaitable[None]]
class ASGI2Protocol(Protocol):
def __init__(self, scope: Scope) -> None:
...
async def __call__(
self, receive: ASGIReceiveCallable, send: ASGISendCallable
) -> None:
...
ASGI2Application = Type[ASGI2Protocol]
ASGI3Application = Callable[
[
Scope,
ASGIReceiveCallable,
ASGISendCallable,
],
Awaitable[None],
]
ASGIApplication = Union[ASGI2Application, ASGI3Application]
__deprecated__ = {
"WebsocketConnectEvent": WebSocketConnectEvent,
"WebsocketAcceptEvent": WebSocketAcceptEvent,
"WebsocketReceiveEvent": WebSocketReceiveEvent,
"WebsocketSendEvent": WebSocketSendEvent,
"WebsocketResponseStartEvent": WebSocketResponseStartEvent,
"WebsocketResponseBodyEvent": WebSocketResponseBodyEvent,
"WebsocketDisconnectEvent": WebSocketDisconnectEvent,
"WebsocketCloseEvent": WebSocketCloseEvent,
}
def __getattr__(name: str) -> Any:
deprecated = __deprecated__.get(name)
if deprecated:
stacklevel = 3 if sys.version_info >= (3, 7) else 4
warnings.warn(
f"'{name}' is deprecated. Use '{deprecated.__name__}' instead.",
category=DeprecationWarning,
stacklevel=stacklevel,
)
return deprecated
raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
def __dir__() -> List[str]:
return sorted(list(__all__) + list(__deprecated__.keys()))
pep562(__name__)
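# Example (a sketch): these types let you annotate a single-callable ASGI app:
#
#     async def app(
#         scope: Scope, receive: ASGIReceiveCallable, send: ASGISendCallable
#     ) -> None:
#         ...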
from io import BytesIO
from tempfile import SpooledTemporaryFile
from asgiref.sync import AsyncToSync, sync_to_async
class WsgiToAsgi:
"""
Wraps a WSGI application to make it into an ASGI application.
"""
def __init__(self, wsgi_application):
self.wsgi_application = wsgi_application
async def __call__(self, scope, receive, send):
"""
ASGI application instantiation point.
We return a new WsgiToAsgiInstance here with the WSGI app
and the scope, ready to respond when it is __call__ed.
"""
await WsgiToAsgiInstance(self.wsgi_application)(scope, receive, send)
class WsgiToAsgiInstance:
"""
Per-socket instance of a wrapped WSGI application
"""
def __init__(self, wsgi_application):
self.wsgi_application = wsgi_application
self.response_started = False
self.response_content_length = None
async def __call__(self, scope, receive, send):
if scope["type"] != "http":
raise ValueError("WSGI wrapper received a non-HTTP scope")
self.scope = scope
with SpooledTemporaryFile(max_size=65536) as body:
# Alright, wait for the http.request messages
while True:
message = await receive()
if message["type"] != "http.request":
raise ValueError("WSGI wrapper received a non-HTTP-request message")
body.write(message.get("body", b""))
if not message.get("more_body"):
break
body.seek(0)
# Wrap send so it can be called from the subthread
self.sync_send = AsyncToSync(send)
# Call the WSGI app
await self.run_wsgi_app(body)
def build_environ(self, scope, body):
"""
Builds a scope and request body into a WSGI environ object.
"""
environ = {
"REQUEST_METHOD": scope["method"],
"SCRIPT_NAME": scope.get("root_path", "").encode("utf8").decode("latin1"),
"PATH_INFO": scope["path"].encode("utf8").decode("latin1"),
"QUERY_STRING": scope["query_string"].decode("ascii"),
"SERVER_PROTOCOL": "HTTP/%s" % scope["http_version"],
"wsgi.version": (1, 0),
"wsgi.url_scheme": scope.get("scheme", "http"),
"wsgi.input": body,
"wsgi.errors": BytesIO(),
"wsgi.multithread": True,
"wsgi.multiprocess": True,
"wsgi.run_once": False,
}
# Get server name and port - required in WSGI, not in ASGI
if "server" in scope:
environ["SERVER_NAME"] = scope["server"][0]
environ["SERVER_PORT"] = str(scope["server"][1])
else:
environ["SERVER_NAME"] = "localhost"
environ["SERVER_PORT"] = "80"
if "client" in scope:
environ["REMOTE_ADDR"] = scope["client"][0]
# Go through headers and make them into environ entries
for name, value in self.scope.get("headers", []):
name = name.decode("latin1")
if name == "content-length":
corrected_name = "CONTENT_LENGTH"
elif name == "content-type":
corrected_name = "CONTENT_TYPE"
else:
corrected_name = "HTTP_%s" % name.upper().replace("-", "_")
# HTTPbis says only ASCII characters are allowed in headers, but we decode latin1 just in case
value = value.decode("latin1")
if corrected_name in environ:
value = environ[corrected_name] + "," + value
environ[corrected_name] = value
return environ
def start_response(self, status, response_headers, exc_info=None):
"""
WSGI start_response callable.
"""
# Don't allow re-calling once response has begun
if self.response_started:
raise exc_info[1].with_traceback(exc_info[2])
# Don't allow re-calling without exc_info
if hasattr(self, "response_start") and exc_info is None:
raise ValueError(
"You cannot call start_response a second time without exc_info"
)
# Extract status code
status_code, _ = status.split(" ", 1)
status_code = int(status_code)
# Extract headers
headers = [
(name.lower().encode("ascii"), value.encode("ascii"))
for name, value in response_headers
]
# Extract content-length
self.response_content_length = None
for name, value in response_headers:
if name.lower() == "content-length":
self.response_content_length = int(value)
# Build the response start message (sent later, with the first body chunk).
self.response_start = {
"type": "http.response.start",
"status": status_code,
"headers": headers,
}
@sync_to_async
def run_wsgi_app(self, body):
"""
Called in a subthread to run the WSGI app. We encapsulate like
this so that the start_response callable is called in the same thread.
"""
# Translate the scope and incoming request body into a WSGI environ
environ = self.build_environ(self.scope, body)
# Run the WSGI app
bytes_sent = 0
for output in self.wsgi_application(environ, self.start_response):
# If this is the first response, include the response headers
if not self.response_started:
self.response_started = True
self.sync_send(self.response_start)
# If the application supplies a Content-Length header
if self.response_content_length is not None:
# The server should not transmit more bytes to the client than the header allows
bytes_allowed = self.response_content_length - bytes_sent
if len(output) > bytes_allowed:
output = output[:bytes_allowed]
self.sync_send(
{"type": "http.response.body", "body": output, "more_body": True}
)
bytes_sent += len(output)
# The server should stop iterating over the response when enough data has been sent
if bytes_sent == self.response_content_length:
break
# Close connection
if not self.response_started:
self.response_started = True
self.sync_send(self.response_start)
self.sync_send({"type": "http.response.body"})
NOTE: Portions of this project's code are copyrighted by
The University of Texas at Austin
while other portions are copyrighted by
Hewlett Packard Enterprise Development LP
Advanced Micro Devices, Inc.
ExplosionAI GmbH
with some overlap. Please see file-level license headers for file-specific
copyright info. All parties provide their portions of the code under the
3-clause BSD license, found below.
---
Copyright (C) 2018, The University of Texas at Austin
Copyright (C) 2016, Hewlett Packard Enterprise Development LP
Copyright (C) 2018, Advanced Micro Devices, Inc.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
- Neither the name(s) of the copyright holder(s) nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Metadata-Version: 2.1
Name: blis
Version: 0.7.5
Summary: The Blis BLAS-like linear algebra library, as a self-contained C-extension.
Home-page: https://github.com/explosion/cython-blis
Author: Matthew Honnibal
Author-email: matt@explosion.ai
License: BSD
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Programming Language :: Cython
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Topic :: Scientific/Engineering
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy (>=1.15.0)
<a href="https://explosion.ai"><img src="https://explosion.ai/assets/img/logo.svg" width="125" height="125" align="right" /></a>
# Cython BLIS: Fast BLAS-like operations from Python and Cython, without the tears
This repository provides the [Blis linear algebra](https://github.com/flame/blis)
routines as a self-contained Python C-extension.
Currently, we only support single-threaded execution, as this is actually best for our workloads (ML inference).
[![Azure Pipelines](https://img.shields.io/azure-devops/build/explosion-ai/public/6/master.svg?logo=azure-pipelines&style=flat-square)](https://dev.azure.com/explosion-ai/public/_build?definitionId=6)
[![pypi Version](https://img.shields.io/pypi/v/blis.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.python.org/pypi/blis)
[![conda](https://img.shields.io/conda/vn/conda-forge/cython-blis.svg?style=flat-square&logo=conda-forge&logoColor=white)](https://anaconda.org/conda-forge/cython-blis)
[![Python wheels](https://img.shields.io/badge/wheels-%E2%9C%93-4c1.svg?longCache=true&style=flat-square&logo=python&logoColor=white)](https://github.com/explosion/wheelwright/releases)
## Installation
You can install the package via pip, first making sure that `pip`, `setuptools`,
and `wheel` are up-to-date:
```bash
pip install -U pip setuptools wheel
pip install blis
```
Wheels should be available, so installation should be fast. If you want to install from source and you're on Windows, you'll need to install LLVM.
### Building BLIS for alternative architectures
The provided wheels should work on x86_64 architectures. Unfortunately, we do not currently know a way to provide different wheels for alternative architectures, and we cannot provide a single binary that works everywhere. So if the wheel doesn't work for your CPU, you'll need to install from the source distribution and tell Blis your CPU architecture using the `BLIS_ARCH` environment variable.
#### a) Installing with generic arch support
```bash
BLIS_ARCH="generic" pip install spacy --no-binary blis
```
#### b) Building specific support
In order to compile Blis, `cython-blis` bundles makefile scripts for specific architectures, generated by running the Blis build system and logging the commands. We do not yet have logs for every architecture, as there are some architectures we have not had access to.
[See here](https://github.com/flame/blis/blob/0.5.1/config_registry) for the list of
architectures. For example, here's how to build support for the ARM architecture `cortexa57`:
```bash
git clone https://github.com/explosion/cython-blis && cd cython-blis
git pull && git submodule init && git submodule update && git submodule status
python3 -m venv env3.6
source env3.6/bin/activate
pip install -r requirements.txt
./bin/generate-make-jsonl linux cortexa57
BLIS_ARCH="cortexa57" python setup.py build_ext --inplace
BLIS_ARCH="cortexa57" python setup.py bdist_wheel
```
Fingers crossed, this will build you a wheel that supports your platform. You
could then [submit a PR](https://github.com/explosion/cython-blis/pulls) with
the `blis/_src/make/linux-cortexa57.jsonl` and
`blis/_src/include/linux-cortexa57/blis.h` files so that you can run:
```bash
BLIS_ARCH=cortexa57 pip install blis --no-binary=blis
```
## Usage
Two APIs are provided: a high-level Python API, and direct
[Cython](http://cython.org) access, which provides fused-type, nogil
Cython bindings to the underlying Blis linear algebra library. Fused
types are a simple template mechanism, allowing just a touch of
compile-time generic programming:
```python
cimport blis.cy
A = <float*>calloc(nN * nI, sizeof(float))
B = <float*>calloc(nO * nI, sizeof(float))
C = <float*>calloc(nr_b0 * nr_b1, sizeof(float))
blis.cy.gemm(blis.cy.NO_TRANSPOSE, blis.cy.NO_TRANSPOSE,
nO, nI, nN,
1.0, A, nI, 1, B, nO, 1,
1.0, C, nO, 1)
```
Bindings have been added as we've needed them. Please submit pull requests if
the library is missing some functions you require.
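For quick experiments there is also a high-level Python API in `blis.py`. A minimal sketch, mirroring the `gemm(X, W, out=y)` pattern used in `blis/benchmark.py` (the shapes are illustrative):

```python
import numpy as np

from blis.py import gemm

X = np.random.uniform(-1.0, 1.0, (384, 384)).astype("f")
W = np.random.uniform(-1.0, 1.0, (384, 384)).astype("f")
y = np.zeros((384, 384), dtype="f")
gemm(X, W, out=y)  # y now holds the product, equivalent to numpy.dot(X, W)
```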
## Development
To build the source package, you should run the following command:
```bash
./bin/update-vendored-source
```
This populates the `blis/_src` folder for the various architectures, using the
`flame-blis` submodule.
## Updating the build files
In order to compile the Blis sources, we use jsonl files that provide the
explicit compiler flags. We build these jsonl files by running Blis's build
system, and then converting the log. This avoids us having to replicate the
build system within Python: we just use the jsonl to make a bunch of subprocess
calls. To support a new OS/architecture combination, we have to provide the
jsonl file and the header.
### Linux
The Linux build files need to be produced from within the manylinux1 docker
container, so that they will be compatible with the wheel building process.
First, install docker. Then do the following to start the container:
```bash
sudo docker run -it quay.io/pypa/manylinux1_x86_64:latest
```

Once within the container, the following commands should check out the repo and
build the jsonl files for the generic arch:

```bash
mkdir /usr/local/repos
cd /usr/local/repos
git clone https://github.com/explosion/cython-blis && cd cython-blis
git pull && git submodule init && git submodule update && git submodule status
/opt/python/cp36-cp36m/bin/python -m venv env3.6
source env3.6/bin/activate
pip install -r requirements.txt
./bin/generate-make-jsonl linux generic --export
BLIS_ARCH=generic python setup.py build_ext --inplace
# N.B.: don't copy to /tmp, docker cp doesn't work from there.
cp blis/_src/include/linux-generic/blis.h /linux-generic-blis.h
cp blis/_src/make/linux-generic.jsonl /
```

Then from a new terminal, retrieve the two files we need out of the container:

```bash
sudo docker ps -l  # Get the container ID
# When I'm in Vagrant, I need to go via cat -- but then I end up with dummy
# lines at the top and bottom. Sigh. If you don't have that problem and
# sudo docker cp just works, just copy the file.
sudo docker cp aa9d42588791:/linux-generic-blis.h - | cat > linux-generic-blis.h
sudo docker cp aa9d42588791:/linux-generic.jsonl - | cat > linux-generic.jsonl
```
blis-0.7.5.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
blis-0.7.5.dist-info/LICENSE,sha256=GoFvQzlw5PMnaNMONhi_MYR1Kq18h-8QeW1xDk-GU7Y,2077
blis-0.7.5.dist-info/METADATA,sha256=7dhwZPUYSxcrmZwC7YuCZR3vrCo0Th-1VK5227YYW8M,7365
blis-0.7.5.dist-info/RECORD,,
blis-0.7.5.dist-info/WHEEL,sha256=P24hRDQapckAvHvgBBzlH5uw6rQ_oZQtKbG6drGrqOM,100
blis-0.7.5.dist-info/top_level.txt,sha256=yqZIBDPqq9RefHu0PWPWmOQfhhKsBy120eoLTMlmXkA,5
blis/__init__.pxd,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
blis/__init__.py,sha256=bV8lTy__5O9CrmYGsSuV6B859iW8ygR884ty09UIwHo,84
blis/__pycache__/__init__.cpython-38.pyc,,
blis/__pycache__/about.cpython-38.pyc,,
blis/__pycache__/benchmark.cpython-38.pyc,,
blis/about.py,sha256=ngZrcPsdqjJNIG7RBfPPeT1dlAR3m_HL6cPIrP-vcBo,558
blis/benchmark.py,sha256=3XTpfeu37Hku82u2XoIV_DgbBZVvLVnA5yyVbd2cDAk,2766
blis/cy.cp38-win_amd64.exp,sha256=0fRhpIw-oQtQAwnSgX-S2k8yk5t4fSdkKn9510kFzUE,716
blis/cy.cp38-win_amd64.lib,sha256=Kxy-1cE3yG1kmfe9qSvSIvRPhGSmMxg2AzlPUZMB-DU,1908
blis/cy.cp38-win_amd64.pyd,sha256=LWKhiL7HyQc2x1f4TviBUuZaYwbT7sxtPPW6-B56D5o,11176960
blis/cy.pxd,sha256=Zq4Hz8OzU2Cyf1-lO5H35iRi8jw8v-FQkfuyBvw1G_U,3175
blis/cy.pyx,sha256=c5IHjoziCzYikr0qJouQBuVOq0fet1zESP9YqN7aZJc,16409
blis/py.cp38-win_amd64.exp,sha256=n-q_ybHXiWoTrcp-6cj19TQwCPeyq8GyOZbFBFNTxVQ,716
blis/py.cp38-win_amd64.lib,sha256=acUjEDKRjV5HYxFuiyYWr-6rWfnSunHG_FM3PulmYjA,1908
blis/py.cp38-win_amd64.pyd,sha256=USLzDASWiEYkxnmZq4gJb5mIOEAuy3n35M_uIV1oNWY,11356160
blis/py.pyx,sha256=pyE3BZkS2WsuPsDy56QfD0DqfcvnlnAZtmON_hIUH2o,6934
blis/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
blis/tests/__pycache__/__init__.cpython-38.pyc,,
blis/tests/__pycache__/common.cpython-38.pyc,,
blis/tests/__pycache__/test_dotv.cpython-38.pyc,,
blis/tests/__pycache__/test_gemm.cpython-38.pyc,,
blis/tests/common.py,sha256=V6G1fzZpfIZl-ICmk7Yc0OxFyE37I0_AuCUnC7lU6p8,2662
blis/tests/test_dotv.py,sha256=5xcvRk0zRxu1Psqceg0fzTktDS7giG8q3zvquwpRals,1172
blis/tests/test_gemm.py,sha256=TKd6XpywJGdEhuGZXRd7ckJMDTH5wUtKXccbF6tfius,2568
Wheel-Version: 1.0
Generator: bdist_wheel (0.37.0)
Root-Is-Purelib: false
Tag: cp38-cp38-win_amd64
# Copyright ExplosionAI GmbH, released under BSD.
from .cy import init
init()
# Copyright ExplosionAI GmbH, released under BSD
# inspired from:
# https://python-packaging-user-guide.readthedocs.org/en/latest/single_source_version/
# https://github.com/pypa/warehouse/blob/master/warehouse/__about__.py
__name__ = "blis"
__version__ = "0.7.5"
__summary__ = (
"The Blis BLAS-like linear algebra library, as a self-contained C-extension."
)
__uri__ = "https://github.com/explosion/cython-blis"
__author__ = "Matthew Honnibal"
__email__ = "matt@explosion.ai"
__license__ = "BSD"
__title__ = "blis"
__release__ = True
# Copyright ExplosionAI GmbH, released under BSD.
import numpy
import numpy.random
from .py import gemm, einsum
from timeit import default_timer as timer
numpy.random.seed(0)
def create_data(nO, nI, batch_size):
X = numpy.zeros((batch_size, nI), dtype="f")
X += numpy.random.uniform(-1.0, 1.0, X.shape)
W = numpy.zeros((nO, nI), dtype="f")
W += numpy.random.uniform(-1.0, 1.0, W.shape)
return X, W
def get_numpy_blas():
blas_libs = numpy.__config__.blas_opt_info["libraries"]
return blas_libs[0]
def numpy_gemm(X, W, n=1000):
nO, nI = W.shape
batch_size = X.shape[0]
total = 0.0
y = numpy.zeros((batch_size, nO), dtype="f")
for i in range(n):
numpy.dot(X, W, out=y)
total += y.sum()
y.fill(0)
print("Total:", total)
def blis_gemm(X, W, n=1000):
nO, nI = W.shape
batch_size = X.shape[0]
total = 0.0
y = numpy.zeros((batch_size, nO), dtype="f")
for i in range(n):
gemm(X, W, out=y)
total += y.sum()
y.fill(0.0)
print("Total:", total)
def numpy_einsum(X, W, n=1000):
nO, nI = W.shape
batch_size = X.shape[0]
total = 0.0
y = numpy.zeros((nO, batch_size), dtype="f")
for i in range(n):
numpy.einsum("ab,cb->ca", X, W, out=y)
total += y.sum()
y.fill(0.0)
print("Total:", total)
def blis_einsum(X, W, n=1000):
nO, nI = W.shape
batch_size = X.shape[0]
total = 0.0
y = numpy.zeros((nO, batch_size), dtype="f")
for i in range(n):
einsum("ab,cb->ca", X, W, out=y)
total += y.sum()
y.fill(0.0)
print("Total:", total)
def main(nI=128 * 3, nO=128 * 3, batch_size=2000):
print(
"Setting up data for gemm. 1000 iters, "
"nO={nO} nI={nI} batch_size={batch_size}".format(**locals())
)
numpy_blas = get_numpy_blas()
X1, W1 = create_data(nI, nO, batch_size)
X2 = X1.copy()
W2 = W1.copy()
print("Blis gemm...")
start = timer()
blis_gemm(X2, W2, n=1000)
end = timer()
blis_time = end - start
print("%.2f seconds" % blis_time)
print("Numpy (%s) gemm..." % numpy_blas)
start = timer()
numpy_gemm(X1, W1)
end = timer()
numpy_time = end - start
print("%.2f seconds" % numpy_time)
print("Blis einsum ab,cb->ca")
start = timer()
blis_einsum(X2, W2, n=1000)
end = timer()
blis_time = end - start
print("%.2f seconds" % blis_time)
print("Numpy (%s) einsum ab,cb->ca" % numpy_blas)
start = timer()
numpy_einsum(X2, W2)
end = timer()
numpy_time = end - start
print("%.2f seconds" % numpy_time)
if __name__ == "__main__":
main()
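The benchmark can also be invoked by hand; a minimal sketch, assuming the compiled blis package is importable:

from blis.benchmark import main

# The keyword values are the defaults from the signature above.
main(nI=384, nO=384, batch_size=2000)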
# Copyright ExplosionAI GmbH, released under BSD.
from cython cimport view
from libc.stdint cimport int64_t
ctypedef float[::1] float1d_t
ctypedef double[::1] double1d_t
ctypedef float[:, ::1] float2d_t
ctypedef double[:, ::1] double2d_t
ctypedef float* floats_t
ctypedef double* doubles_t
ctypedef const float[::1] const_float1d_t
ctypedef const double[::1] const_double1d_t
ctypedef const float[:, ::1] const_float2d_t
ctypedef const double[:, ::1] const_double2d_t
ctypedef const float* const_floats_t
ctypedef const double* const_doubles_t
cdef fused reals_ft:
floats_t
doubles_t
float1d_t
double1d_t
cdef fused const_reals_ft:
const_floats_t
const_doubles_t
const_float1d_t
const_double1d_t
cdef fused reals1d_ft:
float1d_t
double1d_t
cdef fused const_reals1d_ft:
const_float1d_t
const_double1d_t
cdef fused reals2d_ft:
float2d_t
double2d_t
cdef fused const_reals2d_ft:
const_float2d_t
const_double2d_t
cdef fused real_ft:
float
double
ctypedef int64_t dim_t
ctypedef int64_t inc_t
ctypedef int64_t doff_t
# Sucks to set these from magic numbers, but it's better than dragging
# the header into our header.
# We get some peace of mind from checking the values on init.
cpdef enum trans_t:
NO_TRANSPOSE = 0
TRANSPOSE = 8
CONJ_NO_TRANSPOSE = 16
CONJ_TRANSPOSE = 24
cpdef enum conj_t:
NO_CONJUGATE = 0
CONJUGATE = 16
cpdef enum side_t:
LEFT = 0
RIGHT = 1
cpdef enum uplo_t:
LOWER = 192
UPPER = 96
DENSE = 224
cpdef enum diag_t:
NONUNIT_DIAG = 0
UNIT_DIAG = 256
cdef void gemm(
trans_t transa,
trans_t transb,
dim_t m,
dim_t n,
dim_t k,
double alpha,
reals_ft a, inc_t rsa, inc_t csa,
reals_ft b, inc_t rsb, inc_t csb,
double beta,
reals_ft c, inc_t rsc, inc_t csc,
) nogil
cdef void ger(
conj_t conjx,
conj_t conjy,
dim_t m,
dim_t n,
double alpha,
reals_ft x, inc_t incx,
reals_ft y, inc_t incy,
reals_ft a, inc_t rsa, inc_t csa
) nogil
cdef void gemv(
trans_t transa,
conj_t conjx,
dim_t m,
dim_t n,
real_ft alpha,
reals_ft a, inc_t rsa, inc_t csa,
reals_ft x, inc_t incx,
real_ft beta,
reals_ft y, inc_t incy
) nogil
cdef void axpyv(
conj_t conjx,
dim_t m,
real_ft alpha,
reals_ft x, inc_t incx,
reals_ft y, inc_t incy
) nogil
cdef void scalv(
conj_t conjalpha,
dim_t m,
real_ft alpha,
reals_ft x, inc_t incx
) nogil
cdef double dotv(
conj_t conjx,
conj_t conjy,
dim_t m,
reals_ft x,
reals_ft y,
inc_t incx,
inc_t incy,
) nogil
cdef double norm_L1(
dim_t n,
reals_ft x, inc_t incx
) nogil
cdef double norm_L2(
dim_t n,
reals_ft x, inc_t incx
) nogil
cdef double norm_inf(
dim_t n,
reals_ft x, inc_t incx
) nogil
cdef void randv(
dim_t m,
reals_ft x, inc_t incx
) nogil
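The enum values above are hard-coded, as the comment notes, but their bit-flag layout can be sanity-checked without the BLIS header. A plain-Python sketch of the relationships the magic numbers encode (an illustration of the layout, not the actual check init() performs):

# trans_t: value 8 is the transpose bit, value 16 the conjugate bit.
NO_TRANSPOSE, TRANSPOSE, CONJ_NO_TRANSPOSE, CONJ_TRANSPOSE = 0, 8, 16, 24
assert CONJ_TRANSPOSE == TRANSPOSE | CONJ_NO_TRANSPOSE

# conj_t reuses the same conjugate bit as trans_t.
NO_CONJUGATE, CONJUGATE = 0, 16
assert CONJUGATE == CONJ_NO_TRANSPOSE

# uplo_t: DENSE is the union of the LOWER and UPPER bits.
LOWER, UPPER, DENSE = 192, 96, 224
assert DENSE == LOWER | UPPER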
# cython: boundscheck=False
# Copyright ExplosionAI GmbH, released under BSD.
cimport numpy as np
from . cimport cy
from .cy cimport reals1d_ft, reals2d_ft, float1d_t, float2d_t
from .cy cimport const_reals1d_ft, const_reals2d_ft, const_float1d_t, const_float2d_t
from .cy cimport const_double1d_t, const_double2d_t
import numpy
def axpy(const_reals1d_ft A, double scale=1., np.ndarray out=None):
if const_reals1d_ft is const_float1d_t:
if out is None:
out = numpy.zeros((A.shape[0],), dtype='f')
        B = <float*>out.data
        with nogil:
            cy.axpyv(cy.NO_CONJUGATE, A.shape[0], scale, &A[0], 1, B, 1)
        return out
elif const_reals1d_ft is const_double1d_t:
if out is None:
out = numpy.zeros((A.shape[0],), dtype='d')
B = <double*>out.data
with nogil:
cy.axpyv(cy.NO_CONJUGATE, A.shape[0], scale, &A[0], 1, B, 1)
return out
else:
B = NULL
raise TypeError("Unhandled fused type")
def batch_axpy(reals2d_ft A, reals1d_ft B, np.ndarray out=None):
pass
def ger(const_reals2d_ft A, const_reals1d_ft B, double scale=1., np.ndarray out=None):
if const_reals2d_ft is const_float2d_t and const_reals1d_ft is const_float1d_t:
if out is None:
out = numpy.zeros((A.shape[0], B.shape[0]), dtype='f')
with nogil:
cy.ger(
cy.NO_CONJUGATE, cy.NO_CONJUGATE,
A.shape[0], B.shape[0],
scale,
&A[0,0], 1,
&B[0], 1,
<float*>out.data, out.shape[1], 1)
return out
elif const_reals2d_ft is const_double2d_t and const_reals1d_ft is const_double1d_t:
if out is None:
out = numpy.zeros((A.shape[0], B.shape[0]), dtype='d')
with nogil:
cy.ger(
cy.NO_CONJUGATE, cy.NO_CONJUGATE,
A.shape[0], B.shape[0],
scale,
&A[0,0], 1,
&B[0], 1,
<double*>out.data, out.shape[1], 1)
return out
else:
C = NULL
raise TypeError("Unhandled fused type")
def gemm(const_reals2d_ft A, const_reals2d_ft B,
np.ndarray out=None, bint trans1=False, bint trans2=False,
double alpha=1., double beta=1.):
cdef cy.dim_t nM = A.shape[0] if not trans1 else A.shape[1]
cdef cy.dim_t nK = A.shape[1] if not trans1 else A.shape[0]
cdef cy.dim_t nN = B.shape[1] if not trans2 else B.shape[0]
if const_reals2d_ft is const_float2d_t:
if out is None:
out = numpy.zeros((nM, nN), dtype='f')
C = <float*>out.data
with nogil:
cy.gemm(
cy.TRANSPOSE if trans1 else cy.NO_TRANSPOSE,
cy.TRANSPOSE if trans2 else cy.NO_TRANSPOSE,
nM, nN, nK,
alpha,
&A[0,0], A.shape[1], 1,
&B[0,0], B.shape[1], 1,
beta,
C, out.shape[1], 1)
return out
elif const_reals2d_ft is const_double2d_t:
if out is None:
out = numpy.zeros((A.shape[0], B.shape[1]), dtype='d')
C = <double*>out.data
with nogil:
cy.gemm(
cy.TRANSPOSE if trans1 else cy.NO_TRANSPOSE,
cy.TRANSPOSE if trans2 else cy.NO_TRANSPOSE,
A.shape[0], B.shape[1], A.shape[1],
alpha,
&A[0,0], A.shape[1], 1,
&B[0,0], B.shape[1], 1,
beta,
C, out.shape[1], 1)
return out
else:
C = NULL
raise TypeError("Unhandled fused type")
def gemv(const_reals2d_ft A, const_reals1d_ft B,
bint trans1=False, double alpha=1., double beta=1.,
np.ndarray out=None):
if const_reals1d_ft is const_float1d_t and const_reals2d_ft is const_float2d_t:
if out is None:
out = numpy.zeros((A.shape[0],), dtype='f')
with nogil:
cy.gemv(
cy.TRANSPOSE if trans1 else cy.NO_TRANSPOSE,
cy.NO_CONJUGATE,
A.shape[0], A.shape[1],
alpha,
&A[0,0], A.shape[1], 1,
&B[0], 1,
beta,
<float*>out.data, 1)
return out
elif const_reals1d_ft is const_double1d_t and const_reals2d_ft is const_double2d_t:
if out is None:
out = numpy.zeros((A.shape[0],), dtype='d')
with nogil:
cy.gemv(
cy.TRANSPOSE if trans1 else cy.NO_TRANSPOSE,
cy.NO_CONJUGATE,
A.shape[0], A.shape[1],
alpha,
&A[0,0], A.shape[1], 1,
&B[0], 1,
beta,
<double*>out.data, 1)
return out
else:
raise TypeError("Unhandled fused type")
def dotv(const_reals1d_ft X, const_reals1d_ft Y, bint conjX=False, bint conjY=False):
if X.shape[0] != Y.shape[0]:
msg = "Shape mismatch for blis.dotv: (%d,), (%d,)"
raise ValueError(msg % (X.shape[0], Y.shape[0]))
return cy.dotv(
cy.CONJUGATE if conjX else cy.NO_CONJUGATE,
cy.CONJUGATE if conjY else cy.NO_CONJUGATE,
X.shape[0], &X[0], &Y[0], 1, 1
)
def einsum(todo, A, B, out=None):
if todo == 'a,a->a':
return axpy(A, B, out=out)
elif todo == 'a,b->ab':
return ger(A, B, out=out)
elif todo == 'a,b->ba':
return ger(B, A, out=out)
elif todo == 'ab,a->ab':
return batch_axpy(A, B, out=out)
elif todo == 'ab,a->ba':
return batch_axpy(A, B, trans1=True, out=out)
elif todo == 'ab,b->a':
return gemv(A, B, out=out)
elif todo == 'ab,a->b':
return gemv(A, B, trans1=True, out=out)
    # The rule here is: look at the first dimension of the output. It must
    # occur in arg1; set trans1 if it is arg1's second dimension.
    # E.g. bc is the output and b occurs in ab, so ab must be arg1, and
    # because b is its second dimension we need trans1=True, making ba,ac->bc.
elif todo == 'ab,ac->bc':
return gemm(A, B, trans1=True, trans2=False, out=out)
elif todo == 'ab,ac->cb':
return gemm(B, A, out=out, trans1=True, trans2=True)
elif todo == 'ab,bc->ac':
return gemm(A, B, out=out, trans1=False, trans2=False)
elif todo == 'ab,bc->ca':
return gemm(B, A, out=out, trans1=True, trans2=True)
elif todo == 'ab,ca->bc':
return gemm(A, B, out=out, trans1=True, trans2=True)
elif todo == 'ab,ca->cb':
return gemm(B, A, out=out, trans1=False, trans2=False)
elif todo == 'ab,cb->ac':
return gemm(A, B, out=out, trans1=False, trans2=True)
elif todo == 'ab,cb->ca':
return gemm(B, A, out=out, trans1=False, trans2=True)
else:
raise ValueError("Invalid einsum: %s" % todo)
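The trans-mapping rule in the comments above is easy to check against numpy. A small verification sketch for two of the cases (pure numpy, no blis required):

import numpy as np

A = np.random.uniform(-1.0, 1.0, (4, 3))  # shape (a, b)
B = np.random.uniform(-1.0, 1.0, (4, 5))  # shape (a, c)
C = np.random.uniform(-1.0, 1.0, (6, 3))  # shape (c, b)

# 'ab,ac->bc': the output's first dim b is A's second dim, so trans1=True,
# i.e. the gemm computes A.T @ B.
assert np.allclose(np.einsum("ab,ac->bc", A, B), A.T @ B)

# 'ab,cb->ca': the operands swap and the new second operand is transposed,
# i.e. gemm(C, A, trans2=True) computes C @ A.T.
assert np.allclose(np.einsum("ab,cb->ca", A, C), C @ A.T)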
# Copyright ExplosionAI GmbH, released under BSD.
from __future__ import print_function
import numpy as np
np.random.seed(0)
from numpy.testing import assert_allclose
from hypothesis import assume
from hypothesis.strategies import tuples, integers, floats
from hypothesis.extra.numpy import arrays
def lengths(lo=1, hi=10):
return integers(min_value=lo, max_value=hi)
def shapes(min_rows=1, max_rows=100, min_cols=1, max_cols=100):
return tuples(lengths(lo=min_rows, hi=max_rows), lengths(lo=min_cols, hi=max_cols))
def ndarrays_of_shape(shape, lo=-1000.0, hi=1000.0, dtype="float64"):
width = 64 if dtype == "float64" else 32
return arrays(
dtype, shape=shape, elements=floats(min_value=lo, max_value=hi, width=width)
)
def ndarrays(
min_len=0, max_len=10, min_val=-10000000.0, max_val=1000000.0, dtype="float64"
):
return lengths(lo=min_len, hi=max_len).flatmap(
lambda n: ndarrays_of_shape(n, lo=min_val, hi=max_val, dtype=dtype)
)
def matrices(
min_rows=1,
max_rows=10,
min_cols=1,
max_cols=10,
min_value=-10000000.0,
max_value=1000000.0,
dtype="float64",
):
return shapes(
min_rows=min_rows, max_rows=max_rows, min_cols=min_cols, max_cols=max_cols
).flatmap(lambda mn: ndarrays_of_shape(mn, lo=min_value, hi=max_value, dtype=dtype))
def positive_ndarrays(min_len=0, max_len=10, max_val=100000.0, dtype="float64"):
return ndarrays(
min_len=min_len, max_len=max_len, min_val=0, max_val=max_val, dtype=dtype
)
def negative_ndarrays(min_len=0, max_len=10, min_val=-100000.0, dtype="float64"):
return ndarrays(
min_len=min_len, max_len=max_len, min_val=min_val, max_val=-1e-10, dtype=dtype
)
def parse_layer(layer_data):
# Get the first row, excluding the first column
x = layer_data[0, 1:]
# Get the first column, excluding the first row
    # Using np.ascontiguousarray is important here!
b = np.ascontiguousarray(layer_data[1:, 0], dtype="float64")
# Slice out the row and the column used for the X and the bias
W = layer_data[1:, 1:]
assert x.ndim == 1
assert b.ndim == 1
assert b.shape[0] == W.shape[0]
assert x.shape[0] == W.shape[1]
assume(not np.isnan(W.sum()))
assume(not np.isnan(x.sum()))
assume(not np.isnan(b.sum()))
assume(not any(np.isinf(val) for val in W.flatten()))
assume(not any(np.isinf(val) for val in x))
assume(not any(np.isinf(val) for val in b))
return x, b, W
def split_row(layer_data):
return (layer_data[0, :], layer_data[:, :])
# Copyright ExplosionAI GmbH, released under BSD.
from __future__ import division
from hypothesis import given, assume
from blis.tests.common import *
from blis.py import dotv
@given(
ndarrays(min_len=10, max_len=100, min_val=-100.0, max_val=100.0, dtype="float64"),
ndarrays(min_len=10, max_len=100, min_val=-100.0, max_val=100.0, dtype="float64"),
)
def test_memoryview_double_noconj(A, B):
if len(A) < len(B):
B = B[: len(A)]
else:
A = A[: len(B)]
assume(A is not None)
assume(B is not None)
numpy_result = A.dot(B)
result = dotv(A, B)
assert_allclose([numpy_result], result, atol=1e-4, rtol=1e-4)
@given(
ndarrays(min_len=10, max_len=100, min_val=-100.0, max_val=100.0, dtype="float32"),
ndarrays(min_len=10, max_len=100, min_val=-100.0, max_val=100.0, dtype="float32"),
)
def test_memoryview_float_noconj(A, B):
if len(A) < len(B):
B = B[: len(A)]
else:
A = A[: len(B)]
assume(A is not None)
assume(B is not None)
numpy_result = A.dot(B)
result = dotv(A, B)
assert_allclose([numpy_result], result, atol=1e-4, rtol=1e-3)
# Copyright ExplosionAI GmbH, released under BSD.
from __future__ import division
from hypothesis import given, assume
from math import sqrt, floor
from blis.tests.common import *
from blis.py import gemm
def _stretch_matrix(data, m, n):
orig_len = len(data)
orig_m = m
orig_n = n
ratio = sqrt(len(data) / (m * n))
m = int(floor(m * ratio))
n = int(floor(n * ratio))
data = np.ascontiguousarray(data[: m * n], dtype=data.dtype)
return data.reshape((m, n)), m, n
def _reshape_for_gemm(
A, B, a_rows, a_cols, out_cols, dtype, trans_a=False, trans_b=False
):
A, a_rows, a_cols = _stretch_matrix(A, a_rows, a_cols)
if len(B) < a_cols or a_cols < 1:
return (None, None, None)
b_cols = int(floor(len(B) / a_cols))
B = np.ascontiguousarray(B.flatten()[: a_cols * b_cols], dtype=dtype)
B = B.reshape((a_cols, b_cols))
out_cols = B.shape[1]
C = np.zeros(shape=(A.shape[0], B.shape[1]), dtype=dtype)
if trans_a:
A = np.ascontiguousarray(A.T, dtype=dtype)
return A, B, C
@given(
ndarrays(min_len=10, max_len=100, min_val=-100.0, max_val=100.0, dtype="float64"),
ndarrays(min_len=10, max_len=100, min_val=-100.0, max_val=100.0, dtype="float64"),
integers(min_value=2, max_value=1000),
integers(min_value=2, max_value=1000),
integers(min_value=2, max_value=1000),
)
def test_memoryview_double_notrans(A, B, a_rows, a_cols, out_cols):
A, B, C = _reshape_for_gemm(A, B, a_rows, a_cols, out_cols, "float64")
assume(A is not None)
assume(B is not None)
assume(C is not None)
assume(A.size >= 1)
assume(B.size >= 1)
assume(C.size >= 1)
gemm(A, B, out=C)
numpy_result = A.dot(B)
assert_allclose(numpy_result, C, atol=1e-4, rtol=1e-4)
@given(
ndarrays(min_len=10, max_len=100, min_val=-100.0, max_val=100.0, dtype="float32"),
ndarrays(min_len=10, max_len=100, min_val=-100.0, max_val=100.0, dtype="float32"),
integers(min_value=2, max_value=1000),
integers(min_value=2, max_value=1000),
integers(min_value=2, max_value=1000),
)
def test_memoryview_float_notrans(A, B, a_rows, a_cols, out_cols):
A, B, C = _reshape_for_gemm(A, B, a_rows, a_cols, out_cols, dtype="float32")
assume(A is not None)
assume(B is not None)
assume(C is not None)
assume(A.size >= 1)
assume(B.size >= 1)
assume(C.size >= 1)
gemm(A, B, out=C)
numpy_result = A.dot(B)
assert_allclose(numpy_result, C, atol=1e-3, rtol=1e-3)
MIT License
Copyright (c) 2019 ExplosionAI GmbH
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
catalogue-2.0.6.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
catalogue-2.0.6.dist-info/LICENSE,sha256=9_dOqC-aENfOK6Bd1-RgyMK7QqbMfZrnGrNWYbX3HJs,1073
catalogue-2.0.6.dist-info/METADATA,sha256=bzGO_GHn8nrum6I2QVNvg9XtyKIeBpUPEJ6aUaj0CVo,14027
catalogue-2.0.6.dist-info/RECORD,,
catalogue-2.0.6.dist-info/WHEEL,sha256=ewwEueio1C2XeHTvT17n8dZUJgOvyCWCt0WVNLClP9o,92
catalogue-2.0.6.dist-info/top_level.txt,sha256=v0FzwHJY8kky7f3duDJVqj0z3dCbBte3Cx1xLPA_qn4,10
catalogue-2.0.6.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
catalogue/__init__.py,sha256=4Ua_xt5wcJIb1hbdjg5Gumg00e6LLIPKBE9eYJqS02Q,8348
catalogue/__pycache__/__init__.cpython-38.pyc,,
catalogue/_importlib_metadata/__init__.py,sha256=Zra8g0XvENOMxaUMRd84jj4UBzuRQUnB6yUxNIBzAJE,20204
catalogue/_importlib_metadata/__pycache__/__init__.cpython-38.pyc,,
catalogue/_importlib_metadata/__pycache__/_compat.cpython-38.pyc,,
catalogue/_importlib_metadata/_compat.py,sha256=VyumYLwVNW39FXqVV4XKU2yR5eRpHycS7A6pDxFVExY,2406
catalogue/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
catalogue/tests/__pycache__/__init__.cpython-38.pyc,,
catalogue/tests/__pycache__/test_catalogue.cpython-38.pyc,,
catalogue/tests/test_catalogue.py,sha256=_nDAx8iiymaXnJf_e83NCwqF1UPSxoVbamwYbUOuVK8,4328
Wheel-Version: 1.0
Generator: bdist_wheel (0.37.0)
Root-Is-Purelib: true
Tag: py3-none-any
from typing import Sequence, Any, Dict, Tuple, Callable, Optional, TypeVar, Union
import inspect
try: # Python 3.8
import importlib.metadata as importlib_metadata
except ImportError:
from . import _importlib_metadata as importlib_metadata # type: ignore
# Only ever call this once for performance reasons
AVAILABLE_ENTRY_POINTS = importlib_metadata.entry_points() # type: ignore
# This is where functions will be registered
REGISTRY: Dict[Tuple[str, ...], Any] = {}
InFunc = TypeVar("InFunc")
def create(*namespace: str, entry_points: bool = False) -> "Registry":
"""Create a new registry.
*namespace (str): The namespace, e.g. "spacy" or "spacy", "architectures".
entry_points (bool): Accept registered functions from entry points.
RETURNS (Registry): The Registry object.
"""
if check_exists(*namespace):
raise RegistryError(f"Namespace already exists: {namespace}")
return Registry(namespace, entry_points=entry_points)
class Registry(object):
def __init__(self, namespace: Sequence[str], entry_points: bool = False) -> None:
"""Initialize a new registry.
namespace (Sequence[str]): The namespace.
entry_points (bool): Whether to also check for entry points.
"""
self.namespace = namespace
self.entry_point_namespace = "_".join(namespace)
self.entry_points = entry_points
def __contains__(self, name: str) -> bool:
"""Check whether a name is in the registry.
name (str): The name to check.
RETURNS (bool): Whether the name is in the registry.
"""
namespace = tuple(list(self.namespace) + [name])
has_entry_point = self.entry_points and self.get_entry_point(name)
return has_entry_point or namespace in REGISTRY
def __call__(
self, name: str, func: Optional[Any] = None
) -> Callable[[InFunc], InFunc]:
"""Register a function for a given namespace. Same as Registry.register.
name (str): The name to register under the namespace.
func (Any): Optional function to register (if not used as decorator).
RETURNS (Callable): The decorator.
"""
return self.register(name, func=func)
def register(
self, name: str, *, func: Optional[Any] = None
) -> Callable[[InFunc], InFunc]:
"""Register a function for a given namespace.
name (str): The name to register under the namespace.
func (Any): Optional function to register (if not used as decorator).
RETURNS (Callable): The decorator.
"""
def do_registration(func):
_set(list(self.namespace) + [name], func)
return func
if func is not None:
return do_registration(func)
return do_registration
def get(self, name: str) -> Any:
"""Get the registered function for a given name.
name (str): The name.
RETURNS (Any): The registered function.
"""
if self.entry_points:
from_entry_point = self.get_entry_point(name)
if from_entry_point:
return from_entry_point
namespace = list(self.namespace) + [name]
if not check_exists(*namespace):
current_namespace = " -> ".join(self.namespace)
available = ", ".join(sorted(self.get_all().keys())) or "none"
raise RegistryError(
f"Cant't find '{name}' in registry {current_namespace}. Available names: {available}"
)
return _get(namespace)
def get_all(self) -> Dict[str, Any]:
"""Get a all functions for a given namespace.
namespace (Tuple[str]): The namespace to get.
RETURNS (Dict[str, Any]): The functions, keyed by name.
"""
global REGISTRY
result = {}
if self.entry_points:
result.update(self.get_entry_points())
for keys, value in REGISTRY.items():
if len(self.namespace) == len(keys) - 1 and all(
self.namespace[i] == keys[i] for i in range(len(self.namespace))
):
result[keys[-1]] = value
return result
def get_entry_points(self) -> Dict[str, Any]:
"""Get registered entry points from other packages for this namespace.
RETURNS (Dict[str, Any]): Entry points, keyed by name.
"""
result = {}
for entry_point in AVAILABLE_ENTRY_POINTS.get(self.entry_point_namespace, []):
result[entry_point.name] = entry_point.load()
return result
def get_entry_point(self, name: str, default: Optional[Any] = None) -> Any:
"""Check if registered entry point is available for a given name in the
namespace and load it. Otherwise, return the default value.
name (str): Name of entry point to load.
default (Any): The default value to return.
RETURNS (Any): The loaded entry point or the default value.
"""
for entry_point in AVAILABLE_ENTRY_POINTS.get(self.entry_point_namespace, []):
if entry_point.name == name:
return entry_point.load()
return default
def find(self, name: str) -> Dict[str, Optional[Union[str, int]]]:
"""Find the information about a registered function, including the
module and path to the file it's defined in, the line number and the
docstring, if available.
name (str): Name of the registered function.
RETURNS (Dict[str, Optional[Union[str, int]]]): The function info.
"""
func = self.get(name)
module = inspect.getmodule(func)
# These calls will fail for Cython modules so we need to work around them
line_no: Optional[int] = None
file_name: Optional[str] = None
try:
_, line_no = inspect.getsourcelines(func)
file_name = inspect.getfile(func)
except (TypeError, ValueError):
pass
docstring = inspect.getdoc(func)
return {
"module": module.__name__ if module else None,
"file": file_name,
"line_no": line_no,
"docstring": inspect.cleandoc(docstring) if docstring else None,
}
def check_exists(*namespace: str) -> bool:
"""Check if a namespace exists.
*namespace (str): The namespace.
RETURNS (bool): Whether the namespace exists.
"""
return namespace in REGISTRY
def _get(namespace: Sequence[str]) -> Any:
"""Get the value for a given namespace.
namespace (Sequence[str]): The namespace.
RETURNS (Any): The value for the namespace.
"""
global REGISTRY
if not all(isinstance(name, str) for name in namespace):
raise ValueError(
f"Invalid namespace. Expected tuple of strings, but got: {namespace}"
)
namespace = tuple(namespace)
if namespace not in REGISTRY:
raise RegistryError(f"Can't get namespace {namespace} (not in registry)")
return REGISTRY[namespace]
def _get_all(namespace: Sequence[str]) -> Dict[Tuple[str, ...], Any]:
"""Get all matches for a given namespace, e.g. ("a", "b", "c") and
("a", "b") for namespace ("a", "b").
namespace (Sequence[str]): The namespace.
RETURNS (Dict[Tuple[str], Any]): All entries for the namespace, keyed
by their full namespaces.
"""
global REGISTRY
result = {}
for keys, value in REGISTRY.items():
if len(namespace) <= len(keys) and all(
namespace[i] == keys[i] for i in range(len(namespace))
):
result[keys] = value
return result
def _set(namespace: Sequence[str], func: Any) -> None:
"""Set a value for a given namespace.
namespace (Sequence[str]): The namespace.
func (Callable): The value to set.
"""
global REGISTRY
REGISTRY[tuple(namespace)] = func
def _remove(namespace: Sequence[str]) -> Any:
"""Remove a value for a given namespace.
namespace (Sequence[str]): The namespace.
RETURNS (Any): The removed value.
"""
global REGISTRY
namespace = tuple(namespace)
if namespace not in REGISTRY:
raise RegistryError(f"Can't get namespace {namespace} (not in registry)")
removed = REGISTRY[namespace]
del REGISTRY[namespace]
return removed
class RegistryError(ValueError):
pass
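A minimal usage sketch of the registry API above (the "my_pkg", "loaders" namespace and the load_json function are made-up illustrations):

import catalogue

# Create a registry, then register a function under it via the decorator.
loaders = catalogue.create("my_pkg", "loaders")

@loaders.register("json")
def load_json(path):
    import json
    with open(path) as f:
        return json.load(f)

assert "json" in loaders     # __contains__ consults REGISTRY
func = loaders.get("json")   # resolves the registered function
info = loaders.find("json")  # module, file, line_no, docstring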
This package contains a modified version of ca-bundle.crt:
ca-bundle.crt -- Bundle of CA Root Certificates
Certificate data from Mozilla as of: Thu Nov 3 19:04:19 2011
This is a bundle of X.509 certificates of public Certificate Authorities
(CA). These were automatically extracted from Mozilla's root certificates
file (certdata.txt). This file can be found in the mozilla source tree:
http://mxr.mozilla.org/mozilla/source/security/nss/lib/ckfw/builtins/certdata.txt?raw=1
It contains the certificates in PEM format and therefore
can be directly used with curl / libcurl / php_curl, or with
an Apache+mod_ssl webserver for SSL client authentication.
Just configure this file as the SSLCACertificateFile.
***** BEGIN LICENSE BLOCK *****
This Source Code Form is subject to the terms of the Mozilla Public License,
v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain
one at http://mozilla.org/MPL/2.0/.
***** END LICENSE BLOCK *****
@(#) $RCSfile: certdata.txt,v $ $Revision: 1.80 $ $Date: 2011/11/03 15:11:58 $
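The bundle ships in this environment as certifi/cacert.pem; from Python it is normally consumed via certifi.where(), for example to build an SSL context:

import ssl

import certifi

# Point the default SSL context at certifi's CA bundle rather than the
# system trust store.
context = ssl.create_default_context(cafile=certifi.where())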
certifi-2021.10.8.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
certifi-2021.10.8.dist-info/LICENSE,sha256=vp2C82ES-Hp_HXTs1Ih-FGe7roh4qEAEoAEXseR1o-I,1049
certifi-2021.10.8.dist-info/METADATA,sha256=iB_zbT1uX_8_NC7iGv0YEB-9b3idhQwHrFTSq8R1kD8,2994
certifi-2021.10.8.dist-info/RECORD,,
certifi-2021.10.8.dist-info/WHEEL,sha256=ADKeyaGyKF5DwBNE0sRE5pvW-bSkFMJfBuhzZ3rceP4,110
certifi-2021.10.8.dist-info/top_level.txt,sha256=KMu4vUCfsjLrkPbSNdgdekS-pVJzBAJFO__nI8NF6-U,8
certifi/__init__.py,sha256=xWdRgntT3j1V95zkRipGOg_A1UfEju2FcpujhysZLRI,62
certifi/__main__.py,sha256=xBBoj905TUWBLRGANOcf7oi6e-3dMP4cEoG9OyMs11g,243
certifi/__pycache__/__init__.cpython-38.pyc,,
certifi/__pycache__/__main__.cpython-38.pyc,,
certifi/__pycache__/core.cpython-38.pyc,,
certifi/cacert.pem,sha256=-og4Keu4zSpgL5shwfhd4kz0eUnVILzrGCi0zRy2kGw,265969
certifi/core.py,sha256=V0uyxKOYdz6ulDSusclrLmjbPgOXsD0BnEf0SQ7OnoE,2303
Wheel-Version: 1.0
Generator: bdist_wheel (0.35.1)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any