Building and installing SciPy
+++++++++++++++++++++++++++++
See https://www.scipy.org/install.html
.. Contents::
INTRODUCTION
============
It is *strongly* recommended that you use either a complete scientific Python
distribution or binary packages on your platform if they are available, in
particular on Windows and Mac OS X. You should not attempt to build SciPy if
you are not familiar with compiling software from sources.
Recommended distributions are:
- Enthought Canopy (https://www.enthought.com/products/canopy/)
- Anaconda (https://www.anaconda.com)
- Python(x,y) (https://python-xy.github.io/)
- WinPython (https://winpython.github.io/)
The rest of this install documentation summarizes how to build SciPy. Note
that more extensive (and possibly more up-to-date) build instructions are
maintained at https://scipy.github.io/devdocs/building/
PREREQUISITES
=============
SciPy requires the following software installed for your platform:
1) Python__ >= 3.7
__ https://www.python.org
2) NumPy__ >= 1.16.5
__ https://www.numpy.org/
If building from source, SciPy also requires:
3) setuptools__
__ https://github.com/pypa/setuptools
4) pybind11__ >= 2.4.3
__ https://github.com/pybind/pybind11
5) If you want to build the documentation: Sphinx__ >= 2.4.0 and < 3.1.0
__ http://www.sphinx-doc.org/
6) If you want to build SciPy master or another unreleased version from source
(Cython-generated C sources are included in official releases):
Cython__ >= 0.29.18
__ http://cython.org/
Windows
-------
Compilers
~~~~~~~~~
There are two ways to build SciPy on Windows:
1. Use Intel MKL, and Intel compilers or ifort + MSVC. This is what Anaconda
and Enthought Canopy use.
2. Use MSVC + GFortran with OpenBLAS. This is how the SciPy Windows wheels are
built.
Mac OS X
--------
It is recommended to use GCC or Clang; both work fine. GCC is available for
free when installing Xcode, the developer tool suite on Mac OS X. You also
need a Fortran compiler, which is not included with Xcode: you should use a
recent GFortran from an OS X package manager (like Homebrew).
Please do NOT use GFortran from `hpc.sourceforge.net <http://hpc.sourceforge.net>`_,
it is known to generate buggy SciPy binaries.
You should also use a BLAS/LAPACK library from an OS X package manager.
ATLAS, OpenBLAS, and MKL all work.
As of SciPy version 1.2.0, we do not support compiling against the system
Accelerate library for BLAS and LAPACK. It does not support a sufficiently
recent LAPACK interface.
Linux
-----
Most common distributions include all the dependencies. You will need to
install a BLAS/LAPACK library (ATLAS, OpenBLAS, and MKL all work fine),
including development headers, as well as development headers for Python
itself. Those are typically packaged as python-dev.
INSTALLING SCIPY
================
For the latest information, see the website:
https://www.scipy.org
Development version from Git
----------------------------
Use the command::
git clone https://github.com/scipy/scipy.git
cd scipy
git clean -xdf
python setup.py install --user
Documentation
-------------
Type::
cd scipy/doc
make html
From tarballs
-------------
Unpack ``SciPy-<version>.tar.gz``, change to the ``SciPy-<version>/``
directory, and run::
pip install . -v --user
This may take several minutes to half an hour depending on the speed of your
computer.
TESTING
=======
To test SciPy after installation (highly recommended), execute in Python::
>>> import scipy
>>> scipy.test()
To run the full test suite use::
>>> scipy.test('full')
If you are upgrading from an older SciPy release, please test your code for any
deprecation warnings before and after upgrading to avoid surprises::

  $ python -Wd my_code_that_shouldnt_break.py
Please note that you must have version 1.0 or later of the Pytest test
framework installed in order to run the tests. More information about Pytest is
available on the website__.
__ https://pytest.org/
COMPILER NOTES
==============
You can specify which Fortran compiler to use by using the following
install command::
python setup.py config_fc --fcompiler=<Vendor> install
To see a valid list of <Vendor> names, run::
python setup.py config_fc --help-fcompiler
IMPORTANT: It is highly recommended that all libraries that SciPy uses (e.g.
BLAS and ATLAS libraries) are built with the same Fortran compiler. In most
cases, if you mix compilers, you will at best be unable to import SciPy, and
at worst experience crashes and random results.
UNINSTALLING
============
When installing with ``python setup.py install`` or a variation on that, you do
not get proper uninstall behavior for an older, already-installed SciPy version.
In many cases that's not a problem, but if it turns out to be an issue, you
need to uninstall the old version manually first (remove it from, e.g.,
``/usr/lib/python3.4/site-packages/scipy`` or
``$HOME/lib/python3.4/site-packages/scipy``).
Alternatively, you can use ``pip install . --user`` instead of ``python
setup.py install --user`` in order to get reliable uninstall behavior.
The downside is that ``pip`` doesn't show you a build log and doesn't support
incremental rebuilds (it copies the whole source tree to a tempdir).
TROUBLESHOOTING
===============
If you experience problems when building, installing, or testing SciPy, you
can ask for help on the scipy-user@python.org or scipy-dev@python.org mailing
lists. Please include the following information in your message:
NOTE: You can generate some of the following information (items 1-6)
in one command::
python -c 'from numpy.f2py.diagnose import run; run()'
1) Platform information::
python -c 'import os, sys; print(os.name, sys.platform)'
uname -a
OS, its distribution name and version information
etc.
2) Information about C, C++, Fortran compilers/linkers as reported by
the compilers when requesting their version information, e.g.,
the output of
::
gcc -v
g77 --version
3) Python version::
python -c 'import sys; print(sys.version)'
4) NumPy version::
python -c 'import numpy; print(numpy.__version__)'
5) ATLAS version, the locations of atlas and lapack libraries, building
information if any. If you have ATLAS version 3.3.6 or newer, then
give the output of the last command in
::
cd scipy/Lib/linalg
python setup_atlas_version.py build_ext --inplace --force
python -c 'import atlas_version'
6) The output of the following command
::
python INSTALLDIR/numpy/distutils/system_info.py
where INSTALLDIR is, for example, /usr/lib/python3.4/site-packages/.
7) Feel free to add any other relevant information.
For example, the full output (both stdout and stderr) of the SciPy
installation command can be very helpful. Since this output can be
rather large, ask before sending it to the mailing list (or,
better yet, send it directly to one of the developers if asked).
8) If extension modules fail to import, the output of
::
ldd /path/to/ext_module.so
can be useful.
# This file is generated by numpy's setup.py
# It contains system_info results at the time of building this package.
__all__ = ["get_info","show"]
import os
import sys
extra_dll_dir = os.path.join(os.path.dirname(__file__), '.libs')
if sys.platform == 'win32' and os.path.isdir(extra_dll_dir):
if sys.version_info >= (3, 8):
os.add_dll_directory(extra_dll_dir)
else:
os.environ.setdefault('PATH', '')
os.environ['PATH'] += os.pathsep + extra_dll_dir
lapack_mkl_info={}
openblas_lapack_info={'library_dirs': ['C:\\projects\\scipy-wheels\\scipy\\build\\openblas_lapack_info'], 'libraries': ['openblas_lapack_info'], 'language': 'f77', 'define_macros': [('HAVE_CBLAS', None)]}
lapack_opt_info={'library_dirs': ['C:\\projects\\scipy-wheels\\scipy\\build\\openblas_lapack_info'], 'libraries': ['openblas_lapack_info'], 'language': 'f77', 'define_macros': [('HAVE_CBLAS', None)]}
blas_mkl_info={}
blis_info={}
openblas_info={'library_dirs': ['C:\\projects\\scipy-wheels\\scipy\\build\\openblas_info'], 'libraries': ['openblas_info'], 'language': 'f77', 'define_macros': [('HAVE_CBLAS', None)]}
blas_opt_info={'library_dirs': ['C:\\projects\\scipy-wheels\\scipy\\build\\openblas_info'], 'libraries': ['openblas_info'], 'language': 'f77', 'define_macros': [('HAVE_CBLAS', None)]}
def get_info(name):
g = globals()
return g.get(name, g.get(name + "_info", {}))
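# A quick illustration (a sketch; assumes this generated module is importable
# as ``scipy.__config__``): names resolve with or without the ``_info``
# suffix, and unknown names fall back to an empty dict.
#
#     >>> from scipy.__config__ import get_info
#     >>> get_info('openblas') == get_info('openblas_info')
#     True
#     >>> get_info('no_such_library')
#     {}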
def show():
"""
Show libraries in the system on which NumPy was built.
Print information about various resources (libraries, library
directories, include directories, etc.) in the system on which
NumPy was built.
See Also
--------
get_include : Returns the directory containing NumPy C
header files.
Notes
-----
Classes specifying the information to be printed are defined
in the `numpy.distutils.system_info` module.
Information may include:
* ``language``: language used to write the libraries (mostly
C or f77)
* ``libraries``: names of libraries found in the system
* ``library_dirs``: directories containing the libraries
* ``include_dirs``: directories containing library header files
* ``src_dirs``: directories containing library source files
* ``define_macros``: preprocessor macros used by
``distutils.setup``
Examples
--------
>>> np.show_config()
blas_opt_info:
language = c
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
"""
for name,info_dict in globals().items():
if name[0] == "_" or type(info_dict) is not type({}): continue
print(name + ":")
if not info_dict:
print(" NOT AVAILABLE")
for k,v in info_dict.items():
v = str(v)
if k == "sources" and len(v) > 200:
v = v[:60] + " ...\n... " + v[-60:]
print(" %s = %s" % (k,v))
"""
SciPy: A scientific computing package for Python
================================================
Documentation is available in the docstrings and
online at https://docs.scipy.org.
Contents
--------
SciPy imports all the functions from the NumPy namespace, and in
addition provides:
Subpackages
-----------
Using any of these subpackages requires an explicit import. For example,
``import scipy.cluster``.
::
cluster --- Vector Quantization / Kmeans
fft --- Discrete Fourier transforms
fftpack --- Legacy discrete Fourier transforms
integrate --- Integration routines
interpolate --- Interpolation Tools
io --- Data input and output
linalg --- Linear algebra routines
linalg.blas --- Wrappers to BLAS library
linalg.lapack --- Wrappers to LAPACK library
misc --- Various utilities that don't have
another home.
ndimage --- N-D image package
odr --- Orthogonal Distance Regression
optimize --- Optimization Tools
signal --- Signal Processing Tools
signal.windows --- Window functions
sparse --- Sparse Matrices
sparse.linalg --- Sparse Linear Algebra
sparse.linalg.dsolve --- Linear Solvers
sparse.linalg.dsolve.umfpack --- :Interface to the UMFPACK library:
sparse.linalg.eigen --- Sparse Eigenvalue Solvers
sparse.linalg.eigen.lobpcg --- Locally Optimal Block Preconditioned
Conjugate Gradient Method (LOBPCG)
spatial --- Spatial data structures and algorithms
special --- Special functions
stats --- Statistical Functions
Utility tools
-------------
::
test --- Run scipy unittests
show_config --- Show scipy build configuration
show_numpy_config --- Show numpy build configuration
__version__ --- SciPy version string
__numpy_version__ --- Numpy version string
"""
def __dir__():
return ['test']
__all__ = __dir__()
from numpy import show_config as show_numpy_config
if show_numpy_config is None:
raise ImportError(
"Cannot import SciPy when running from NumPy source directory.")
from numpy import __version__ as __numpy_version__
# Import numpy symbols to scipy name space (DEPRECATED)
from ._lib.deprecation import _deprecated
import numpy as _num
linalg = None
_msg = ('scipy.{0} is deprecated and will be removed in SciPy 2.0.0, '
'use numpy.{0} instead')
# deprecate callable objects, skipping classes
for _key in _num.__all__:
_fun = getattr(_num, _key)
if callable(_fun) and not isinstance(_fun, type):
_fun = _deprecated(_msg.format(_key))(_fun)
globals()[_key] = _fun
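# Illustrative effect of the loop above: ``scipy.array([1, 2])`` still works,
# but now emits a DeprecationWarning advising ``numpy.array`` instead.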
from numpy.random import rand, randn
_msg = ('scipy.{0} is deprecated and will be removed in SciPy 2.0.0, '
'use numpy.random.{0} instead')
rand = _deprecated(_msg.format('rand'))(rand)
randn = _deprecated(_msg.format('randn'))(randn)
# fft is especially problematic, so was removed in SciPy 1.6.0
from numpy.fft import ifft
ifft = _deprecated('scipy.ifft is deprecated and will be removed in SciPy '
'2.0.0, use scipy.fft.ifft instead')(ifft)
import numpy.lib.scimath as _sci
_msg = ('scipy.{0} is deprecated and will be removed in SciPy 2.0.0, '
'use numpy.lib.scimath.{0} instead')
for _key in _sci.__all__:
_fun = getattr(_sci, _key)
if callable(_fun):
_fun = _deprecated(_msg.format(_key))(_fun)
globals()[_key] = _fun
__all__ += _num.__all__
__all__ += ['randn', 'rand', 'ifft']
del _num
# Remove the linalg imported from NumPy so that the scipy.linalg package can be
# imported.
del linalg
__all__.remove('linalg')
# We first need to detect if we're being called as part of the SciPy
# setup procedure itself in a reliable manner.
try:
__SCIPY_SETUP__
except NameError:
__SCIPY_SETUP__ = False
if __SCIPY_SETUP__:
import sys as _sys
_sys.stderr.write('Running from SciPy source directory.\n')
del _sys
else:
try:
from scipy.__config__ import show as show_config
except ImportError as e:
msg = """Error importing SciPy: you cannot import SciPy while
being in scipy source directory; please exit the SciPy source
tree first and relaunch your Python interpreter."""
raise ImportError(msg) from e
from scipy.version import version as __version__
# Allow distributors to run custom init code
from . import _distributor_init
from scipy._lib import _pep440
# In maintenance branch, change to np_maxversion N+3 if numpy is at N
# See setup.py for more details
np_minversion = '1.16.5'
np_maxversion = '1.23.0'
if (_pep440.parse(__numpy_version__) < _pep440.Version(np_minversion) or
_pep440.parse(__numpy_version__) >= _pep440.Version(np_maxversion)):
import warnings
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
f" is required for this version of SciPy (detected "
f"version {__numpy_version__}",
UserWarning)
del _pep440
from scipy._lib._ccallback import LowLevelCallable
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
# This makes "from scipy import fft" return scipy.fft, not np.fft
del fft
import os
import numpy as np
from ._fortran import *
from .system_info import combine_dict
# Don't use the deprecated NumPy C API. Define this to a fixed version instead of
# NPY_API_VERSION in order not to break compilation for released SciPy versions
# when NumPy introduces a new deprecation. Use in setup.py::
#
# config.add_extension('_name', sources=['source_fname'], **numpy_nodepr_api)
#
numpy_nodepr_api = dict(define_macros=[("NPY_NO_DEPRECATED_API",
"NPY_1_9_API_VERSION")])
def uses_blas64():
return (os.environ.get("NPY_USE_BLAS_ILP64", "0") != "0")
def import_file(folder, module_name):
"""Import a file directly, avoiding importing scipy"""
import importlib.util  # spec_from_file_location lives in importlib.util
import pathlib
fname = pathlib.Path(folder) / f'{module_name}.py'
spec = importlib.util.spec_from_file_location(module_name, str(fname))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
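# Usage sketch (hypothetical checkout path): load ``scipy/version.py`` as a
# standalone module without triggering ``import scipy``.
#
#     version_mod = import_file('/path/to/scipy/scipy', 'version')
#     print(version_mod.version)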
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
"""
Helpers for detection of compiler features
"""
import tempfile
import os
import sys
from numpy.distutils.system_info import dict_append
def try_compile(compiler, code=None, flags=[], ext=None):
"""Returns True if the compiler is able to compile the given code"""
from distutils.errors import CompileError
from numpy.distutils.fcompiler import FCompiler
if code is None:
if isinstance(compiler, FCompiler):
code = " program main\n return\n end"
else:
code = 'int main (int argc, char **argv) { return 0; }'
ext = ext or compiler.src_extensions[0]
with tempfile.TemporaryDirectory() as temp_dir:
fname = os.path.join(temp_dir, 'main'+ext)
with open(fname, 'w') as f:
f.write(code)
try:
compiler.compile([fname], output_dir=temp_dir, extra_postargs=flags)
except CompileError:
return False
return True
def has_flag(compiler, flag, ext=None):
"""Returns True if the compiler supports the given flag"""
return try_compile(compiler, flags=[flag], ext=ext)
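# For example (an illustrative flag, not one SciPy necessarily uses): check
# support before appending it to the compile arguments.
#
#     if has_flag(compiler, '-funroll-loops', ext='.c'):
#         extra_compile_args.append('-funroll-loops')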
def get_cxx_std_flag(compiler):
"""Detects compiler flag for c++14, c++11, or None if not detected"""
# GNU C compiler documentation uses single dash:
# https://gcc.gnu.org/onlinedocs/gcc/Standards.html
# but silently understands two dashes, like --std=c++11 too.
# Other GCC compatible compilers, like Intel C Compiler on Linux do not.
gnu_flags = ['-std=c++14', '-std=c++11']
flags_by_cc = {
'msvc': ['/std:c++14', None],
'intelw': ['/Qstd=c++14', '/Qstd=c++11'],
'intelem': ['-std=c++14', '-std=c++11']
}
flags = flags_by_cc.get(compiler.compiler_type, gnu_flags)
for flag in flags:
if flag is None:
return None
if has_flag(compiler, flag, ext='.cpp'):
return flag
from numpy.distutils import log
log.warn('Could not detect c++ standard flag')
return None
def get_c_std_flag(compiler):
"""Detects compiler flag to enable C99"""
gnu_flag = '-std=c99'
flag_by_cc = {
'msvc': None,
'intelw': '/Qstd=c99',
'intelem': '-std=c99'
}
flag = flag_by_cc.get(compiler.compiler_type, gnu_flag)
if flag is None:
return None
if has_flag(compiler, flag, ext='.c'):
return flag
from numpy.distutils import log
log.warn('Could not detect c99 standard flag')
return None
def try_add_flag(args, compiler, flag, ext=None):
"""Appends flag to the list of arguments if supported by the compiler"""
if try_compile(compiler, flags=args+[flag], ext=ext):
args.append(flag)
def set_c_flags_hook(build_ext, ext):
"""Sets basic compiler flags for compiling C99 code"""
std_flag = get_c_std_flag(build_ext.compiler)
if std_flag is not None:
ext.extra_compile_args.append(std_flag)
def set_cxx_flags_hook(build_ext, ext):
"""Sets basic compiler flags for compiling C++11 code"""
cc = build_ext._cxx_compiler
args = ext.extra_compile_args
std_flag = get_cxx_std_flag(cc)
if std_flag is not None:
args.append(std_flag)
if sys.platform == 'darwin':
# Set min macOS version
min_macos_flag = '-mmacosx-version-min=10.9'
if has_flag(cc, min_macos_flag):
args.append(min_macos_flag)
ext.extra_link_args.append(min_macos_flag)
def set_cxx_flags_clib_hook(build_clib, build_info):
cc = build_clib.compiler
new_args = []
new_link_args = []
std_flag = get_cxx_std_flag(cc)
if std_flag is not None:
new_args.append(std_flag)
if sys.platform == 'darwin':
# Set min macOS version
min_macos_flag = '-mmacosx-version-min=10.9'
if has_flag(cc, min_macos_flag):
new_args.append(min_macos_flag)
new_link_args.append(min_macos_flag)
dict_append(build_info, extra_compiler_args=new_args,
extra_link_args=new_link_args)
def configuration(parent_package='', top_path=None):
from numpy.distutils.misc_util import Configuration
config = Configuration('_build_utils', parent_package, top_path)
config.add_data_dir('tests')
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
import warnings
import numpy as np
import numpy.distutils.system_info
from numpy.distutils.system_info import (system_info,
numpy_info,
NotFoundError,
BlasNotFoundError,
LapackNotFoundError,
AtlasNotFoundError,
LapackSrcNotFoundError,
BlasSrcNotFoundError,
dict_append,
get_info as old_get_info)
from scipy._lib import _pep440
def combine_dict(*dicts, **kw):
"""
Combine Numpy distutils style library configuration dictionaries.
Parameters
----------
*dicts
Dictionaries to combine. List-valued keys will be concatenated.
Otherwise, duplicate keys with different values result in
an error. The input arguments are not modified.
**kw
Keyword arguments are treated as an additional dictionary
(the first one, i.e., prepended).
Returns
-------
combined
Dictionary with combined values.
"""
new_dict = {}
for d in (kw,) + dicts:
for key, value in d.items():
if new_dict.get(key, None) is not None:
old_value = new_dict[key]
if isinstance(value, (list, tuple)):
if isinstance(old_value, (list, tuple)):
new_dict[key] = list(old_value) + list(value)
continue
elif value == old_value:
continue
raise ValueError("Conflicting configuration dicts: {!r} {!r}"
"".format(new_dict, d))
else:
new_dict[key] = value
return new_dict
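# Example: list-valued keys are concatenated, scalar keys must agree, and
# keyword arguments act as an extra, prepended dictionary.
#
#     >>> combine_dict({'libraries': ['blas']}, {'libraries': ['cblas']},
#     ...              language='c')
#     {'language': 'c', 'libraries': ['blas', 'cblas']}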
if _pep440.parse(np.__version__) >= _pep440.Version("1.15.0.dev"):
# For new enough numpy.distutils, the ACCELERATE=None environment
# variable in the top-level setup.py is enough, so no need to
# customize BLAS detection.
get_info = old_get_info
else:
# For NumPy < 1.15.0, we need overrides.
def get_info(name, notfound_action=0):
# Special case our custom *_opt_info.
cls = {'lapack_opt': lapack_opt_info,
'blas_opt': blas_opt_info}.get(name.lower())
if cls is None:
return old_get_info(name, notfound_action)
return cls().get_info(notfound_action)
#
# The following is copypaste from numpy.distutils.system_info, with
# OSX Accelerate-related parts removed.
#
class lapack_opt_info(system_info):
notfounderror = LapackNotFoundError
def calc_info(self):
lapack_mkl_info = get_info('lapack_mkl')
if lapack_mkl_info:
self.set_info(**lapack_mkl_info)
return
openblas_info = get_info('openblas_lapack')
if openblas_info:
self.set_info(**openblas_info)
return
openblas_info = get_info('openblas_clapack')
if openblas_info:
self.set_info(**openblas_info)
return
atlas_info = get_info('atlas_3_10_threads')
if not atlas_info:
atlas_info = get_info('atlas_3_10')
if not atlas_info:
atlas_info = get_info('atlas_threads')
if not atlas_info:
atlas_info = get_info('atlas')
need_lapack = 0
need_blas = 0
info = {}
if atlas_info:
l = atlas_info.get('define_macros', [])
if ('ATLAS_WITH_LAPACK_ATLAS', None) in l \
or ('ATLAS_WITHOUT_LAPACK', None) in l:
need_lapack = 1
info = atlas_info
else:
warnings.warn(AtlasNotFoundError.__doc__, stacklevel=2)
need_blas = 1
need_lapack = 1
dict_append(info, define_macros=[('NO_ATLAS_INFO', 1)])
if need_lapack:
lapack_info = get_info('lapack')
#lapack_info = {} ## uncomment for testing
if lapack_info:
dict_append(info, **lapack_info)
else:
warnings.warn(LapackNotFoundError.__doc__, stacklevel=2)
lapack_src_info = get_info('lapack_src')
if not lapack_src_info:
warnings.warn(LapackSrcNotFoundError.__doc__, stacklevel=2)
return
dict_append(info, libraries=[('flapack_src', lapack_src_info)])
if need_blas:
blas_info = get_info('blas')
if blas_info:
dict_append(info, **blas_info)
else:
warnings.warn(BlasNotFoundError.__doc__, stacklevel=2)
blas_src_info = get_info('blas_src')
if not blas_src_info:
warnings.warn(BlasSrcNotFoundError.__doc__, stacklevel=2)
return
dict_append(info, libraries=[('fblas_src', blas_src_info)])
self.set_info(**info)
return
class blas_opt_info(system_info):
notfounderror = BlasNotFoundError
def calc_info(self):
blas_mkl_info = get_info('blas_mkl')
if blas_mkl_info:
self.set_info(**blas_mkl_info)
return
blis_info = get_info('blis')
if blis_info:
self.set_info(**blis_info)
return
openblas_info = get_info('openblas')
if openblas_info:
self.set_info(**openblas_info)
return
atlas_info = get_info('atlas_3_10_blas_threads')
if not atlas_info:
atlas_info = get_info('atlas_3_10_blas')
if not atlas_info:
atlas_info = get_info('atlas_blas_threads')
if not atlas_info:
atlas_info = get_info('atlas_blas')
need_blas = 0
info = {}
if atlas_info:
info = atlas_info
else:
warnings.warn(AtlasNotFoundError.__doc__, stacklevel=2)
need_blas = 1
dict_append(info, define_macros=[('NO_ATLAS_INFO', 1)])
if need_blas:
blas_info = get_info('blas')
if blas_info:
dict_append(info, **blas_info)
else:
warnings.warn(BlasNotFoundError.__doc__, stacklevel=2)
blas_src_info = get_info('blas_src')
if not blas_src_info:
warnings.warn(BlasSrcNotFoundError.__doc__, stacklevel=2)
return
dict_append(info, libraries=[('fblas_src', blas_src_info)])
self.set_info(**info)
return
import sys
import os
from Cython import Tempita as tempita
# XXX: If this import ever fails (does it really?), vendor either
# cython.tempita or numpy/npy_tempita.
def process_tempita(fromfile):
"""Process tempita templated file and write out the result.
The template file is expected to end in `.c.in` or `.pyx.in`:
E.g. processing `template.c.in` generates `template.c`.
"""
if not fromfile.endswith('.in'):
raise ValueError("Unexpected extension: %s" % fromfile)
from_filename = tempita.Template.from_filename
template = from_filename(fromfile,
encoding=sys.getdefaultencoding())
content = template.substitute()
outfile = os.path.splitext(fromfile)[0]
with open(outfile, 'w') as f:
f.write(content)
if __name__ == "__main__":
process_tempita(sys.argv[1])
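# Usage sketch (hypothetical template): a file ``pairs.pyx.in`` containing
#
#     {{for T in ['float', 'double']}}
#     def add_{{T}}(x, y):
#         return x + y
#     {{endfor}}
#
# is expanded to ``pairs.pyx`` when this script is invoked with
# ``pairs.pyx.in`` as its argument.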
import re
import scipy
from numpy.testing import assert_
def test_valid_scipy_version():
# Verify that the SciPy version is a valid one (no .post suffix or other
# nonsense). See NumPy issue gh-6431 for an issue caused by an invalid
# version.
version_pattern = r"^[0-9]+\.[0-9]+\.[0-9]+(|a[0-9]|b[0-9]|rc[0-9])"
dev_suffix = r"(\.dev0\+.+([0-9a-f]{7}|Unknown))"
if scipy.version.release:
res = re.match(version_pattern, scipy.__version__)
else:
res = re.match(version_pattern + dev_suffix, scipy.__version__)
assert_(res is not None, scipy.__version__)
""" Distributor init file
Distributors: you can add custom code here to support particular distributions
of scipy.
For example, this is a good place to put any checks for hardware requirements.
The scipy standard source distribution will not put code in this file, so you
can safely replace this file with your own version.
"""
import os
# On Windows, SciPy loads important DLLs, and the code below aims to
# alleviate issues with DLL path resolution portability by loading the
# DLLs via absolute paths.
if os.name == 'nt':
from ctypes import WinDLL
import glob
# convention for storing / loading the DLL from
# scipy/.libs/, if present
libs_path = os.path.abspath(os.path.join(os.path.dirname(__file__),
'.libs'))
if os.path.isdir(libs_path):
# for Python >= 3.8, DLL resolution ignores the PATH variable
# and the current working directory; see release notes:
# https://docs.python.org/3/whatsnew/3.8.html#bpo-36085-whatsnew
# Only the system paths, the directory containing the DLL, and
# directories added with add_dll_directory() are searched for
# load-time dependencies with Python >= 3.8
# this module was originally added to support DLL resolution in
# Python 3.8 because of the changes described above--providing the
# absolute paths to the DLLs allowed for proper resolution/loading
# however, we also started to receive reports of problems with DLL
# resolution with Python 3.7 that were sometimes alleviated with
# inclusion of the _distributor_init.py module; see SciPy main
# repo gh-11826
# we noticed in scipy-wheels repo gh-70 that inclusion of
# _distributor_init.py in 32-bit wheels for Python 3.7 resulted
# in failures in DLL resolution (64-bit 3.7 did not)
# as a result, we decided to combine both the old (working directory)
# and new (absolute path to DLL location) DLL resolution mechanisms
# to improve the chances of resolving DLLs across a wider range of
# Python versions
# we did not experiment with manipulating the PATH environment variable
# to include libs_path; it is not immediately clear if this would have
# robustness or security advantages over changing working directories
# as done below
# we should remove the working directory shims when our minimum supported
# Python version is 3.8 and trust the improvements to secure DLL loading
# in the standard lib for Python >= 3.8
try:
owd = os.getcwd()
os.chdir(libs_path)
for filename in glob.glob(os.path.join(libs_path, '*dll')):
WinDLL(os.path.abspath(filename))
finally:
os.chdir(owd)
"""
Module containing private utility functions
===========================================
The ``scipy._lib`` namespace is empty (for now). Tests for all
utilities in submodules of ``_lib`` can be run with::
from scipy import _lib
_lib.test()
"""
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
'''Helper functions to get location of header files.'''
import pathlib
from typing import Union
def _boost_dir(ret_path: bool = False) -> Union[pathlib.Path, str]:
'''Directory where root Boost/ directory lives.'''
p = pathlib.Path(__file__).parent / 'boost'
return p if ret_path else str(p)
import sys as _sys
from keyword import iskeyword as _iskeyword
def _validate_names(typename, field_names, extra_field_names):
"""
Ensure that all the given names are valid Python identifiers that
do not start with '_'. Also check that there are no duplicates
among field_names + extra_field_names.
"""
for name in [typename] + field_names + extra_field_names:
if type(name) is not str:
raise TypeError('typename and all field names must be strings')
if not name.isidentifier():
raise ValueError('typename and all field names must be valid '
f'identifiers: {name!r}')
if _iskeyword(name):
raise ValueError('typename and all field names cannot be a '
f'keyword: {name!r}')
seen = set()
for name in field_names + extra_field_names:
if name.startswith('_'):
raise ValueError('Field names cannot start with an underscore: '
f'{name!r}')
if name in seen:
raise ValueError(f'Duplicate field name: {name!r}')
seen.add(name)
# Note: This code is adapted from CPython:Lib/collections/__init__.py
def _make_tuple_bunch(typename, field_names, extra_field_names=None,
module=None):
"""
Create a namedtuple-like class with additional attributes.
This function creates a subclass of tuple that acts like a namedtuple
and that has additional attributes.
The additional attributes are listed in `extra_field_names`. The
values assigned to these attributes are not part of the tuple.
The reason this function exists is to allow functions in SciPy
that currently return a tuple or a namedtuple to return objects
that have additional attributes, while maintaining backwards
compatibility.
This should only be used to enhance *existing* functions in SciPy.
New functions are free to create objects as return values without
having to maintain backwards compatibility with an old tuple or
namedtuple return value.
Parameters
----------
typename : str
The name of the type.
field_names : list of str
List of names of the values to be stored in the tuple. These names
will also be attributes of instances, so the values in the tuple
can be accessed by indexing or as attributes. At least one name
is required. See the Notes for additional restrictions.
extra_field_names : list of str, optional
List of names of values that will be stored as attributes of the
object. See the notes for additional restrictions.
Returns
-------
cls : type
The new class.
Notes
-----
There are restrictions on the names that may be used in `field_names`
and `extra_field_names`:
* The names must be unique--no duplicates allowed.
* The names must be valid Python identifiers, and must not begin with
an underscore.
* The names must not be Python keywords (e.g. 'def', 'and', etc., are
not allowed).
Examples
--------
>>> from scipy._lib._bunch import _make_tuple_bunch
Create a class that acts like a namedtuple with length 2 (with field
names `x` and `y`) that will also have the attributes `w` and `beta`:
>>> Result = _make_tuple_bunch('Result', ['x', 'y'], ['w', 'beta'])
`Result` is the new class. We call it with keyword arguments to create
a new instance with given values.
>>> result1 = Result(x=1, y=2, w=99, beta=0.5)
>>> result1
Result(x=1, y=2, w=99, beta=0.5)
`result1` acts like a tuple of length 2:
>>> len(result1)
2
>>> result1[:]
(1, 2)
The values assigned when the instance was created are available as
attributes:
>>> result1.y
2
>>> result1.beta
0.5
"""
if len(field_names) == 0:
raise ValueError('field_names must contain at least one name')
if extra_field_names is None:
extra_field_names = []
_validate_names(typename, field_names, extra_field_names)
typename = _sys.intern(str(typename))
field_names = tuple(map(_sys.intern, field_names))
extra_field_names = tuple(map(_sys.intern, extra_field_names))
all_names = field_names + extra_field_names
arg_list = ', '.join(field_names)
full_list = ', '.join(all_names)
repr_fmt = ''.join(('(',
', '.join(f'{name}=%({name})r' for name in all_names),
')'))
tuple_new = tuple.__new__
_dict, _tuple, _zip = dict, tuple, zip
# Create all the named tuple methods to be added to the class namespace
s = f"""\
def __new__(_cls, {arg_list}, **extra_fields):
return _tuple_new(_cls, ({arg_list},))
def __init__(self, {arg_list}, **extra_fields):
for key in self._extra_fields:
if key not in extra_fields:
raise TypeError("missing keyword argument '%s'" % (key,))
for key, val in extra_fields.items():
if key not in self._extra_fields:
raise TypeError("unexpected keyword argument '%s'" % (key,))
self.__dict__[key] = val
def __setattr__(self, key, val):
raise AttributeError("can't set attribute %r of class %r"
% (key, self.__class__.__name__))
"""
del arg_list
namespace = {'_tuple_new': tuple_new,
'__builtins__': dict(TypeError=TypeError,
AttributeError=AttributeError),
'__name__': f'namedtuple_{typename}'}
exec(s, namespace)
__new__ = namespace['__new__']
__new__.__doc__ = f'Create new instance of {typename}({full_list})'
__init__ = namespace['__init__']
__init__.__doc__ = f'Instantiate instance of {typename}({full_list})'
__setattr__ = namespace['__setattr__']
def __repr__(self):
'Return a nicely formatted representation string'
return self.__class__.__name__ + repr_fmt % self._asdict()
def _asdict(self):
'Return a new dict which maps field names to their values.'
out = _dict(_zip(self._fields, self))
out.update(self.__dict__)
return out
def __getnewargs_ex__(self):
'Return self as a plain tuple. Used by copy and pickle.'
return _tuple(self), self.__dict__
# Modify function metadata to help with introspection and debugging
for method in (__new__, __repr__, _asdict, __getnewargs_ex__):
method.__qualname__ = f'{typename}.{method.__name__}'
# Build-up the class namespace dictionary
# and use type() to build the result class
class_namespace = {
'__doc__': f'{typename}({full_list})',
'_fields': field_names,
'__new__': __new__,
'__init__': __init__,
'__repr__': __repr__,
'__setattr__': __setattr__,
'_asdict': _asdict,
'_extra_fields': extra_field_names,
'__getnewargs_ex__': __getnewargs_ex__,
}
for index, name in enumerate(field_names):
doc = _sys.intern(f'Alias for field number {index}')
def _get(self, index=index):
return self[index]
class_namespace[name] = property(_get, doc=doc)
for name in extra_field_names:
doc = _sys.intern(f'Alias for name {name}')
def _get(self, name=name):
return self.__dict__[name]
class_namespace[name] = property(_get, doc=doc)
result = type(typename, (tuple,), class_namespace)
# For pickling to work, the __module__ variable needs to be set to the
# frame where the named tuple is created. Bypass this step in environments
# where sys._getframe is not defined (Jython for example) or sys._getframe
# is not defined for arguments greater than 0 (IronPython), or where the
# user has specified a particular module.
if module is None:
try:
module = _sys._getframe(1).f_globals.get('__name__', '__main__')
except (AttributeError, ValueError):
pass
if module is not None:
result.__module__ = module
__new__.__module__ = module
return result
from . import _ccallback_c
import ctypes
PyCFuncPtr = ctypes.CFUNCTYPE(ctypes.c_void_p).__bases__[0]
ffi = None
class CData:
pass
def _import_cffi():
global ffi, CData
if ffi is not None:
return
try:
import cffi
ffi = cffi.FFI()
CData = ffi.CData
except ImportError:
ffi = False
class LowLevelCallable(tuple):
"""
Low-level callback function.
Parameters
----------
function : {PyCapsule, ctypes function pointer, cffi function pointer}
Low-level callback function.
user_data : {PyCapsule, ctypes void pointer, cffi void pointer}
User data to pass on to the callback function.
signature : str, optional
Signature of the function. If omitted, determined from *function*,
if possible.
Attributes
----------
function
Callback function given.
user_data
User data given.
signature
Signature of the function.
Methods
-------
from_cython
Class method for constructing callables from Cython C-exported
functions.
Notes
-----
The argument ``function`` can be one of:
- PyCapsule, whose name contains the C function signature
- ctypes function pointer
- cffi function pointer
The signature of the low-level callback must match one of those expected
by the routine it is passed to.
If constructing low-level functions from a PyCapsule, the name of the
capsule must be the corresponding signature, in the format::
return_type (arg1_type, arg2_type, ...)
For example::
"void (double)"
"double (double, int *, void *)"
The context of a PyCapsule passed in as ``function`` is used as ``user_data``,
if an explicit value for ``user_data`` was not given.
"""
# Make the class immutable
__slots__ = ()
def __new__(cls, function, user_data=None, signature=None):
# We need to hold a reference to the function & user data,
# to prevent them going out of scope
item = cls._parse_callback(function, user_data, signature)
return tuple.__new__(cls, (item, function, user_data))
def __repr__(self):
return "LowLevelCallable({!r}, {!r})".format(self.function, self.user_data)
@property
def function(self):
return tuple.__getitem__(self, 1)
@property
def user_data(self):
return tuple.__getitem__(self, 2)
@property
def signature(self):
return _ccallback_c.get_capsule_signature(tuple.__getitem__(self, 0))
def __getitem__(self, idx):
raise ValueError()
@classmethod
def from_cython(cls, module, name, user_data=None, signature=None):
"""
Create a low-level callback function from an exported Cython function.
Parameters
----------
module : module
Cython module where the exported function resides
name : str
Name of the exported function
user_data : {PyCapsule, ctypes void pointer, cffi void pointer}, optional
User data to pass on to the callback function.
signature : str, optional
Signature of the function. If omitted, determined from *function*.
"""
try:
function = module.__pyx_capi__[name]
except AttributeError as e:
raise ValueError("Given module is not a Cython module with __pyx_capi__ attribute") from e
except KeyError as e:
raise ValueError("No function {!r} found in __pyx_capi__ of the module".format(name)) from e
return cls(function, user_data, signature)
@classmethod
def _parse_callback(cls, obj, user_data=None, signature=None):
_import_cffi()
if isinstance(obj, LowLevelCallable):
func = tuple.__getitem__(obj, 0)
elif isinstance(obj, PyCFuncPtr):
func, signature = _get_ctypes_func(obj, signature)
elif isinstance(obj, CData):
func, signature = _get_cffi_func(obj, signature)
elif _ccallback_c.check_capsule(obj):
func = obj
else:
raise ValueError("Given input is not a callable or a low-level callable (pycapsule/ctypes/cffi)")
if isinstance(user_data, ctypes.c_void_p):
context = _get_ctypes_data(user_data)
elif isinstance(user_data, CData):
context = _get_cffi_data(user_data)
elif user_data is None:
context = 0
elif _ccallback_c.check_capsule(user_data):
context = user_data
else:
raise ValueError("Given user data is not a valid low-level void* pointer (pycapsule/ctypes/cffi)")
return _ccallback_c.get_raw_capsule(func, signature, context)
#
# ctypes helpers
#
def _get_ctypes_func(func, signature=None):
# Get function pointer
func_ptr = ctypes.cast(func, ctypes.c_void_p).value
# Construct function signature
if signature is None:
signature = _typename_from_ctypes(func.restype) + " ("
for j, arg in enumerate(func.argtypes):
if j == 0:
signature += _typename_from_ctypes(arg)
else:
signature += ", " + _typename_from_ctypes(arg)
signature += ")"
return func_ptr, signature
def _typename_from_ctypes(item):
if item is None:
return "void"
elif item is ctypes.c_void_p:
return "void *"
name = item.__name__
pointer_level = 0
while name.startswith("LP_"):
pointer_level += 1
name = name[3:]
if name.startswith('c_'):
name = name[2:]
if pointer_level > 0:
name += " " + "*"*pointer_level
return name
def _get_ctypes_data(data):
# Get voidp pointer
return ctypes.cast(data, ctypes.c_void_p).value
#
# CFFI helpers
#
def _get_cffi_func(func, signature=None):
# Get function pointer
func_ptr = ffi.cast('uintptr_t', func)
# Get signature
if signature is None:
signature = ffi.getctype(ffi.typeof(func)).replace('(*)', ' ')
return func_ptr, signature
def _get_cffi_data(data):
# Get pointer
return ffi.cast('uintptr_t', data)
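# Usage sketch for ``LowLevelCallable`` above (illustrative; assumes a shared
# library ``libfoo.so`` exporting ``double f(double, void *)``):
#
#     import ctypes
#     lib = ctypes.CDLL('./libfoo.so')
#     lib.f.restype = ctypes.c_double
#     lib.f.argtypes = (ctypes.c_double, ctypes.c_void_p)
#     callback = LowLevelCallable(lib.f)
#     callback.signature   # -> "double (double, void *)", inferred via ctypes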
"""
Disjoint set data structure
"""
class DisjointSet:
""" Disjoint set data structure for incremental connectivity queries.
.. versionadded:: 1.6.0
Attributes
----------
n_subsets : int
The number of subsets.
Methods
-------
add
merge
connected
subset
subsets
__getitem__
Notes
-----
This class implements the disjoint set [1]_, also known as the *union-find*
or *merge-find* data structure. The *find* operation (implemented in
`__getitem__`) implements the *path halving* variant. The *merge* method
implements the *merge by size* variant.
References
----------
.. [1] https://en.wikipedia.org/wiki/Disjoint-set_data_structure
Examples
--------
>>> from scipy.cluster.hierarchy import DisjointSet
Initialize a disjoint set:
>>> disjoint_set = DisjointSet([1, 2, 3, 'a', 'b'])
Merge some subsets:
>>> disjoint_set.merge(1, 2)
True
>>> disjoint_set.merge(3, 'a')
True
>>> disjoint_set.merge('a', 'b')
True
>>> disjoint_set.merge('b', 'b')
False
Find root elements:
>>> disjoint_set[2]
1
>>> disjoint_set['b']
3
Test connectivity:
>>> disjoint_set.connected(1, 2)
True
>>> disjoint_set.connected(1, 'b')
False
List elements in disjoint set:
>>> list(disjoint_set)
[1, 2, 3, 'a', 'b']
Get the subset containing 'a':
>>> disjoint_set.subset('a')
{'a', 3, 'b'}
Get all subsets in the disjoint set:
>>> disjoint_set.subsets()
[{1, 2}, {'a', 3, 'b'}]
"""
def __init__(self, elements=None):
self.n_subsets = 0
self._sizes = {}
self._parents = {}
# _nbrs is a circular linked list which links connected elements.
self._nbrs = {}
# _indices tracks the element insertion order in `__iter__`.
self._indices = {}
if elements is not None:
for x in elements:
self.add(x)
def __iter__(self):
"""Returns an iterator of the elements in the disjoint set.
Elements are ordered by insertion order.
"""
return iter(self._indices)
def __len__(self):
return len(self._indices)
def __contains__(self, x):
return x in self._indices
def __getitem__(self, x):
"""Find the root element of `x`.
Parameters
----------
x : hashable object
Input element.
Returns
-------
root : hashable object
Root element of `x`.
"""
if x not in self._indices:
raise KeyError(x)
# find by "path halving"
parents = self._parents
while self._indices[x] != self._indices[parents[x]]:
parents[x] = parents[parents[x]]
x = parents[x]
return x
def add(self, x):
"""Add element `x` to disjoint set
"""
if x in self._indices:
return
self._sizes[x] = 1
self._parents[x] = x
self._nbrs[x] = x
self._indices[x] = len(self._indices)
self.n_subsets += 1
def merge(self, x, y):
"""Merge the subsets of `x` and `y`.
The smaller subset (the child) is merged into the larger subset (the
parent). If the subsets are of equal size, the root element which was
first inserted into the disjoint set is selected as the parent.
Parameters
----------
x, y : hashable object
Elements to merge.
Returns
-------
merged : bool
True if `x` and `y` were in disjoint sets, False otherwise.
"""
xr = self[x]
yr = self[y]
if self._indices[xr] == self._indices[yr]:
return False
sizes = self._sizes
if (sizes[xr], self._indices[yr]) < (sizes[yr], self._indices[xr]):
xr, yr = yr, xr
self._parents[yr] = xr
self._sizes[xr] += self._sizes[yr]
self._nbrs[xr], self._nbrs[yr] = self._nbrs[yr], self._nbrs[xr]
self.n_subsets -= 1
return True
def connected(self, x, y):
"""Test whether `x` and `y` are in the same subset.
Parameters
----------
x, y : hashable object
Elements to test.
Returns
-------
result : bool
True if `x` and `y` are in the same set, False otherwise.
"""
return self._indices[self[x]] == self._indices[self[y]]
def subset(self, x):
"""Get the subset containing `x`.
Parameters
----------
x : hashable object
Input element.
Returns
-------
result : set
Subset containing `x`.
"""
if x not in self._indices:
raise KeyError(x)
result = [x]
nxt = self._nbrs[x]
while self._indices[nxt] != self._indices[x]:
result.append(nxt)
nxt = self._nbrs[nxt]
return set(result)
def subsets(self):
"""Get all the subsets in the disjoint set.
Returns
-------
result : list
Subsets in the disjoint set.
"""
result = []
visited = set()
for x in self:
if x not in visited:
xset = self.subset(x)
visited.update(xset)
result.append(xset)
return result
"""
Module for testing automatic garbage collection of objects
.. autosummary::
:toctree: generated/
set_gc_state - enable or disable garbage collection
gc_state - context manager for given state of garbage collector
assert_deallocated - context manager to check for circular references on object
"""
import weakref
import gc
from contextlib import contextmanager
from platform import python_implementation
__all__ = ['set_gc_state', 'gc_state', 'assert_deallocated']
IS_PYPY = python_implementation() == 'PyPy'
class ReferenceError(AssertionError):
pass
def set_gc_state(state):
""" Set status of garbage collector """
if gc.isenabled() == state:
return
if state:
gc.enable()
else:
gc.disable()
@contextmanager
def gc_state(state):
""" Context manager to set state of garbage collector to `state`
Parameters
----------
state : bool
True for gc enabled, False for disabled
Examples
--------
>>> with gc_state(False):
... assert not gc.isenabled()
>>> with gc_state(True):
... assert gc.isenabled()
"""
orig_state = gc.isenabled()
set_gc_state(state)
yield
set_gc_state(orig_state)
@contextmanager
def assert_deallocated(func, *args, **kwargs):
"""Context manager to check that object is deallocated
This is useful for checking that an object can be freed directly by
reference counting, without requiring gc to break reference cycles.
GC is disabled inside the context manager.
This check is not available on PyPy.
Parameters
----------
func : callable
Callable to create object to check
\\*args : sequence
positional arguments to `func` in order to create object to check
\\*\\*kwargs : dict
keyword arguments to `func` in order to create object to check
Examples
--------
>>> class C: pass
>>> with assert_deallocated(C) as c:
... # do something
... del c
>>> class C:
... def __init__(self):
... self._circular = self # Make circular reference
>>> with assert_deallocated(C) as c: #doctest: +IGNORE_EXCEPTION_DETAIL
... # do something
... del c
Traceback (most recent call last):
...
ReferenceError: Remaining reference(s) to object
"""
if IS_PYPY:
raise RuntimeError("assert_deallocated is unavailable on PyPy")
with gc_state(False):
obj = func(*args, **kwargs)
ref = weakref.ref(obj)
yield obj
del obj
if ref() is not None:
raise ReferenceError("Remaining reference(s) to object")
"""
Generic test utilities.
"""
import os
import re
import sys
__all__ = ['PytestTester', 'check_free_memory']
class FPUModeChangeWarning(RuntimeWarning):
"""Warning about FPU mode change"""
pass
class PytestTester:
"""
Pytest test runner entry point.
"""
def __init__(self, module_name):
self.module_name = module_name
def __call__(self, label="fast", verbose=1, extra_argv=None, doctests=False,
coverage=False, tests=None, parallel=None):
import pytest
module = sys.modules[self.module_name]
module_path = os.path.abspath(module.__path__[0])
pytest_args = ['--showlocals', '--tb=short']
if doctests:
raise ValueError("Doctests not supported")
if extra_argv:
pytest_args += list(extra_argv)
if verbose and int(verbose) > 1:
pytest_args += ["-" + "v"*(int(verbose)-1)]
if coverage:
pytest_args += ["--cov=" + module_path]
if label == "fast":
pytest_args += ["-m", "not slow"]
elif label != "full":
pytest_args += ["-m", label]
if tests is None:
tests = [self.module_name]
if parallel is not None and parallel > 1:
if _pytest_has_xdist():
pytest_args += ['-n', str(parallel)]
else:
import warnings
warnings.warn('Could not run tests in parallel because '
'pytest-xdist plugin is not available.')
pytest_args += ['--pyargs'] + list(tests)
try:
code = pytest.main(pytest_args)
except SystemExit as exc:
code = exc.code
return (code == 0)
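# Typical use (as wired up in scipy/__init__.py): ``scipy.test()`` runs the
# fast subset, while ``scipy.test('full', parallel=4)`` runs everything on
# four workers when the pytest-xdist plugin is installed.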
def _pytest_has_xdist():
"""
Check if the pytest-xdist plugin is installed, providing parallel tests
"""
# Check xdist exists without importing, otherwise pytest emits warnings
from importlib.util import find_spec
return find_spec('xdist') is not None
def check_free_memory(free_mb):
"""
Check that *free_mb* MB of memory is available; otherwise, call pytest.skip
"""
import pytest
try:
mem_free = _parse_size(os.environ['SCIPY_AVAILABLE_MEM'])
msg = '{0} MB memory required, but environment SCIPY_AVAILABLE_MEM={1}'.format(
free_mb, os.environ['SCIPY_AVAILABLE_MEM'])
except KeyError:
mem_free = _get_mem_available()
if mem_free is None:
pytest.skip("Could not determine available memory; set SCIPY_AVAILABLE_MEM "
"variable to free memory in MB to run the test.")
msg = '{0} MB memory required, but {1} MB available'.format(
free_mb, mem_free/1e6)
if mem_free < free_mb * 1e6:
pytest.skip(msg)
def _parse_size(size_str):
suffixes = {'': 1e6,
'b': 1.0,
'k': 1e3, 'M': 1e6, 'G': 1e9, 'T': 1e12,
'kb': 1e3, 'Mb': 1e6, 'Gb': 1e9, 'Tb': 1e12,
'kib': 1024.0, 'Mib': 1024.0**2, 'Gib': 1024.0**3, 'Tib': 1024.0**4}
m = re.match(r'^\s*(\d+)\s*({0})\s*$'.format('|'.join(suffixes.keys())),
size_str,
re.I)
if not m or m.group(2) not in suffixes:
raise ValueError("Invalid size string")
return float(m.group(1)) * suffixes[m.group(2)]
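# For example: ``_parse_size('1500')`` returns 1.5e9 (bare numbers are MB),
# while ``_parse_size('4 Gib')`` returns 4 * 1024**3 bytes.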
def _get_mem_available():
"""
Get information about memory available, not counting swap.
"""
try:
import psutil
return psutil.virtual_memory().available
except (ImportError, AttributeError):
pass
if sys.platform.startswith('linux'):
info = {}
with open('/proc/meminfo', 'r') as f:
for line in f:
p = line.split()
info[p[0].strip(':').lower()] = float(p[1]) * 1e3
if 'memavailable' in info:
# Linux >= 3.14
return info['memavailable']
else:
return info['memfree'] + info['cached']
return None
import threading
import scipy._lib.decorator
__all__ = ['ReentrancyError', 'ReentrancyLock', 'non_reentrant']
class ReentrancyError(RuntimeError):
pass
class ReentrancyLock:
"""
Threading lock that raises an exception for reentrant calls.
Calls from different threads are serialized, and nested calls from the
same thread result in an error.
The object can be used as a context manager or to decorate functions
via the decorate() method.
"""
def __init__(self, err_msg):
self._rlock = threading.RLock()
self._entered = False
self._err_msg = err_msg
def __enter__(self):
self._rlock.acquire()
if self._entered:
self._rlock.release()
raise ReentrancyError(self._err_msg)
self._entered = True
def __exit__(self, type, value, traceback):
self._entered = False
self._rlock.release()
def decorate(self, func):
def caller(func, *a, **kw):
with self:
return func(*a, **kw)
return scipy._lib.decorator.decorate(func, caller)
def non_reentrant(err_msg=None):
"""
Decorate a function with a threading lock and prevent reentrant calls.
"""
def decorator(func):
msg = err_msg
if msg is None:
msg = "%s is not re-entrant" % func.__name__
lock = ReentrancyLock(msg)
return lock.decorate(func)
return decorator
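# Usage sketch (hypothetical function): nested calls from the same thread
# raise ReentrancyError, while calls from other threads simply serialize.
#
#     @non_reentrant()
#     def solve():
#         ...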
''' Contexts for *with* statement providing temporary directories
'''
import os
from contextlib import contextmanager
from shutil import rmtree
from tempfile import mkdtemp
@contextmanager
def tempdir():
"""Create and return a temporary directory. This has the same
behavior as mkdtemp but can be used as a context manager.
Upon exiting the context, the directory and everything contained
in it are removed.
Examples
--------
>>> import os
>>> with tempdir() as tmpdir:
... fname = os.path.join(tmpdir, 'example_file.txt')
... with open(fname, 'wt') as fobj:
... _ = fobj.write('a string\\n')
>>> os.path.exists(tmpdir)
False
"""
d = mkdtemp()
yield d
rmtree(d)
@contextmanager
def in_tempdir():
''' Create, return, and change directory to a temporary directory
Examples
--------
>>> import os
>>> my_cwd = os.getcwd()
>>> with in_tempdir() as tmpdir:
... _ = open('test.txt', 'wt').write('some text')
... assert os.path.isfile('test.txt')
... assert os.path.isfile(os.path.join(tmpdir, 'test.txt'))
>>> os.path.exists(tmpdir)
False
>>> os.getcwd() == my_cwd
True
'''
pwd = os.getcwd()
d = mkdtemp()
os.chdir(d)
yield d
os.chdir(pwd)
rmtree(d)
@contextmanager
def in_dir(dir=None):
""" Change directory to given directory for duration of ``with`` block
Useful when you want to use `in_tempdir` for the final test, but
you are still debugging. For example, you may want to do this in the end:
>>> with in_tempdir() as tmpdir:
... # do something complicated which might break
... pass
But, indeed, the complicated thing does break, and meanwhile, the
``in_tempdir`` context manager wiped out the directory with the
temporary files that you wanted for debugging. So, while debugging, you
replace with something like:
>>> with in_dir() as tmpdir: # Use working directory by default
... # do something complicated which might break
... pass
You can then look at the temporary file outputs to debug what is happening,
fix, and finally replace ``in_dir`` with ``in_tempdir`` again.
"""
cwd = os.getcwd()
if dir is None:
yield cwd
return
os.chdir(dir)
yield dir
os.chdir(cwd)
BSD 3-Clause License
Copyright (c) 2018, Quansight-Labs
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
.. note::
If you are looking for overrides for NumPy-specific methods, see the
documentation for :obj:`unumpy`. This page explains how to write
back-ends and multimethods.
``uarray`` is built around a back-end protocol and overridable multimethods.
It is necessary to define multimethods for back-ends to be able to override them.
See the documentation of :obj:`generate_multimethod` on how to write multimethods.
Let's start with the simplest:
``__ua_domain__`` defines the back-end *domain*. The domain is a
period-separated string naming the modules you extend plus the submodule. For
example, if a submodule ``module2.submodule`` extends ``module1``
(i.e., it exposes dispatchables marked as types available in ``module1``),
then the domain string should be ``"module1.module2.submodule"``.
For the purpose of this demonstration, we'll be creating an object and setting
its attributes directly. However, note that you can use a module or your own type
as a backend as well.
>>> class Backend: pass
>>> be = Backend()
>>> be.__ua_domain__ = "ua_examples"
It might be useful at this point to sidetrack to the documentation of
:obj:`generate_multimethod` to find out how to generate a multimethod
overridable by :obj:`uarray`. Needless to say, writing a backend and
creating multimethods are mostly orthogonal activities, and knowing
one doesn't necessarily require knowledge of the other, although it
is certainly helpful. We expect core API designers/specifiers to write the
multimethods, and implementors to override them. But, as is often the case,
similar people write both.
Without further ado, here's an example multimethod:
>>> import uarray as ua
>>> from uarray import Dispatchable
>>> def override_me(a, b):
... return Dispatchable(a, int),
>>> def override_replacer(args, kwargs, dispatchables):
... return (dispatchables[0], args[1]), {}
>>> overridden_me = ua.generate_multimethod(
... override_me, override_replacer, "ua_examples"
... )
Next comes the part about overriding the multimethod. This requires
the ``__ua_function__`` protocol and the ``__ua_convert__``
protocol. The ``__ua_function__`` protocol has the signature
``(method, args, kwargs)``, where ``method`` is the passed
multimethod and ``args``/``kwargs`` are the arguments to call it with.
>>> def __ua_function__(method, args, kwargs):
...     return method.__name__, args, kwargs
>>> be.__ua_function__ = __ua_function__
The other protocol of interest is the ``__ua_convert__`` protocol. It has the
signature ``(dispatchables, coerce)``. When ``coerce`` is ``False``,
conversion between formats should ideally be an ``O(1)`` operation; in
particular, no memory copying should be involved, only views of the existing
data.
>>> def __ua_convert__(dispatchables, coerce):
...     for d in dispatchables:
...         if d.type is int:
...             if coerce and d.coercible:
...                 yield str(d.value)
...             else:
...                 yield d.value
>>> be.__ua_convert__ = __ua_convert__
Now that we have defined the backend, the next thing to do is to call the multimethod.
>>> with ua.set_backend(be):
...     overridden_me(1, "2")
('override_me', (1, '2'), {})
Note that the marked type has no effect on the actual type of the passed object.
We can also coerce the type of the input.
>>> with ua.set_backend(be, coerce=True):
...     overridden_me(1, "2")
...     overridden_me(1.0, "2")
('override_me', ('1', '2'), {})
('override_me', ('1.0', '2'), {})
Another feature is that if you remove ``__ua_convert__``, the arguments are not
converted at all and it's up to the backend to handle that.
>>> del be.__ua_convert__
>>> with ua.set_backend(be):
...     overridden_me(1, "2")
('override_me', (1, '2'), {})
You also have the option to return ``NotImplemented``, in which case processing moves on
to the next back-end, which, in this case, doesn't exist. The same applies to
``__ua_convert__``.
>>> be.__ua_function__ = lambda *a, **kw: NotImplemented
>>> with ua.set_backend(be):
...     overridden_me(1, "2")
Traceback (most recent call last):
    ...
uarray.backend.BackendNotImplementedError: ...
The last possibility is that ``__ua_convert__`` is absent entirely, in which
case the conversion job is left to ``__ua_function__``; however, converting
outputs back into arrays after the call will then not be possible.
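As a final sanity check (a sketch in the style of the examples above; it
assumes no other backends have been registered in this session), calling the
multimethod outside any ``set_backend`` block raises the same error:

>>> overridden_me(1, "2")
Traceback (most recent call last):
    ...
uarray.backend.BackendNotImplementedError: ...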
"""
from ._backend import *
__version__ = '0.5.1+49.g4c3f1d7.scipy'
def pre_build_hook(build_ext, ext):
    from scipy._build_utils.compiler_helper import (
        set_cxx_flags_hook, try_add_flag)
    cc = build_ext._cxx_compiler
    args = ext.extra_compile_args
    set_cxx_flags_hook(build_ext, ext)
    if cc.compiler_type == 'msvc':
        # MSVC needs /EHsc to enable standard C++ exception handling.
        args.append('/EHsc')
    else:
        # Hide non-exported symbols from the shared library where supported.
        try_add_flag(args, cc, '-fvisibility=hidden')
def configuration(parent_package='', top_path=None):
from numpy.distutils.misc_util import Configuration
config = Configuration('_uarray', parent_package, top_path)
config.add_data_files('LICENSE')
ext = config.add_extension('_uarray',
sources=['_uarray_dispatch.cxx'],
language='c++')
ext._pre_build_hook = pre_build_hook
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
import functools
import warnings
__all__ = ["_deprecated"]
def _deprecated(msg, stacklevel=2):
"""Deprecate a function by emitting a warning on use."""
def wrap(fun):
if isinstance(fun, type):
warnings.warn(
"Trying to deprecate class {!r}".format(fun),
category=RuntimeWarning, stacklevel=2)
return fun
@functools.wraps(fun)
def call(*args, **kwargs):
warnings.warn(msg, category=DeprecationWarning,
stacklevel=stacklevel)
return fun(*args, **kwargs)
call.__doc__ = msg
return call
return wrap
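# Illustrative sketch (hypothetical names, not exported): how `_deprecated`
# is typically applied and what the caller observes.
def _example_deprecated_usage():
    @_deprecated("`old_add` is deprecated, use `new_add` instead!")
    def old_add(a, b):
        return a + b
    # The call below emits a DeprecationWarning pointing at this line
    # (stacklevel=2) and then runs the original body.
    return old_add(1, 2)  # -> 3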
class _DeprecationHelperStr:
    """
    Helper class used by deprecate_cython_api: hashes and compares equal
    to a given string, but emits a DeprecationWarning whenever an equality
    check succeeds (i.e., when the key is looked up in ``__pyx_capi__``).
    """
def __init__(self, content, message):
self._content = content
self._message = message
def __hash__(self):
return hash(self._content)
def __eq__(self, other):
res = (self._content == other)
if res:
warnings.warn(self._message, category=DeprecationWarning,
stacklevel=2)
return res
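# Sketch of the key-swapping trick used by `deprecate_cython_api` below
# (hypothetical data, illustration only): the helper hashes like the wrapped
# string, so it can replace a dict key; the warning fires when a dictionary
# lookup performs the equality check.
def _example_deprecation_helper_lookup():
    capi = {"dgemm": object()}
    helper = _DeprecationHelperStr("dgemm", "`dgemm` is deprecated!")
    capi[helper] = capi.pop("dgemm")
    # Looking the routine up by its plain name compares the stored helper key
    # against "dgemm", which emits the DeprecationWarning.
    return capi["dgemm"]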
def deprecate_cython_api(module, routine_name, new_name=None, message=None):
"""
Deprecate an exported cdef function in a public Cython API module.
Only functions can be deprecated; typedefs etc. cannot.
Parameters
----------
module : module
Public Cython API module (e.g. scipy.linalg.cython_blas).
    routine_name : str
        Name of the routine to deprecate. May also be a fused-type
        routine, in which case all of its specializations are deprecated.
    new_name : str, optional
        New name to include in the deprecation warning message.
    message : str, optional
        Additional text appended to the deprecation warning message.
Examples
--------
    Usually, this function would be used at the top level of the
    module's ``.pyx`` file:
>>> from scipy._lib.deprecation import deprecate_cython_api
>>> import scipy.linalg.cython_blas as mod
>>> deprecate_cython_api(mod, "dgemm", "dgemm_new",
... message="Deprecated in Scipy 1.5.0")
>>> del deprecate_cython_api, mod
After this, Cython modules that use the deprecated function emit a
deprecation warning when they are imported.
"""
old_name = "{}.{}".format(module.__name__, routine_name)
if new_name is None:
depdoc = "`%s` is deprecated!" % old_name
else:
depdoc = "`%s` is deprecated, use `%s` instead!" % \
(old_name, new_name)
if message is not None:
depdoc += "\n" + message
d = module.__pyx_capi__
    # Check whether the function is a fused-type function with mangled names:
    # Cython exports each specialization as '__pyx_fuse_<j><routine_name>'.
j = 0
has_fused = False
while True:
fused_name = "__pyx_fuse_{}{}".format(j, routine_name)
if fused_name in d:
has_fused = True
d[_DeprecationHelperStr(fused_name, depdoc)] = d.pop(fused_name)
j += 1
else:
break
# If not, apply deprecation to the named routine
if not has_fused:
d[_DeprecationHelperStr(routine_name, depdoc)] = d.pop(routine_name)
import os
def check_boost_submodule():
from scipy._lib._boost_utils import _boost_dir
if not os.path.exists(_boost_dir(ret_path=True) / 'README.md'):
raise RuntimeError("Missing the `boost` submodule! Run `git submodule "
"update --init` to fix this.")
def build_clib_pre_build_hook(cmd, ext):
from scipy._build_utils.compiler_helper import get_cxx_std_flag
std_flag = get_cxx_std_flag(cmd.compiler)
ext.setdefault('extra_compiler_args', [])
if std_flag is not None:
ext['extra_compiler_args'].append(std_flag)
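# Note: for GCC/Clang the flag added above is typically of the form
# '-std=c++14' (the exact value is chosen by
# scipy._build_utils.compiler_helper.get_cxx_std_flag); if the compiler needs
# no flag, get_cxx_std_flag returns None and nothing is appended.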
def configuration(parent_package='', top_path=None):
from numpy.distutils.misc_util import Configuration
from scipy._lib._boost_utils import _boost_dir
check_boost_submodule()
config = Configuration('_lib', parent_package, top_path)
config.add_data_files('tests/*.py')
include_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), 'src'))
depends = [os.path.join(include_dir, 'ccallback.h')]
config.add_extension("_ccallback_c",
sources=["_ccallback_c.c"],
depends=depends,
include_dirs=[include_dir])
config.add_extension("_test_ccallback",
sources=["src/_test_ccallback.c"],
depends=depends,
include_dirs=[include_dir])
config.add_extension("_fpumode",
sources=["_fpumode.c"])
def get_messagestream_config(ext, build_dir):
# Generate a header file containing defines
config_cmd = config.get_config_cmd()
defines = []
if config_cmd.check_func('open_memstream', decl=True, call=True):
defines.append(('HAVE_OPEN_MEMSTREAM', '1'))
target = os.path.join(os.path.dirname(__file__), 'src',
'messagestream_config.h')
with open(target, 'w') as f:
for name, value in defines:
f.write('#define {0} {1}\n'.format(name, value))
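    # With 'open_memstream' available, the generated header would contain a
    # single line (a sketch of the build-time output, not a checked-in file):
    #
    #     #define HAVE_OPEN_MEMSTREAM 1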
depends = [os.path.join(include_dir, 'messagestream.h')]
config.add_extension("messagestream",
sources=["messagestream.c"] + [get_messagestream_config],
depends=depends,
include_dirs=[include_dir])
config.add_extension("_test_deprecation_call",
sources=["_test_deprecation_call.c"],
include_dirs=[include_dir])
config.add_extension("_test_deprecation_def",
sources=["_test_deprecation_def.c"],
include_dirs=[include_dir])
config.add_subpackage('_uarray')
# ensure Boost was checked out and builds
config.add_library(
'test_boost_build',
sources=['tests/test_boost_build.cpp'],
include_dirs=_boost_dir(),
language='c++',
_pre_build_hook=build_clib_pre_build_hook)
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
import sys
from scipy._lib._testutils import _parse_size, _get_mem_available
import pytest
def test__parse_size():
    # Bare numbers default to megabytes; 'k'/'M'/'G'/'Tb' suffixes are decimal
    # units and 'Mib'/'Tib' are binary. An entry mapping to None (none at
    # present) would be expected to raise ValueError in the loop below.
    expected = {
'12': 12e6,
'12 b': 12,
'12k': 12e3,
' 12 M ': 12e6,
' 12 G ': 12e9,
' 12Tb ': 12e12,
'12 Mib ': 12 * 1024.0**2,
'12Tib': 12 * 1024.0**4,
}
for inp, outp in sorted(expected.items()):
if outp is None:
with pytest.raises(ValueError):
_parse_size(inp)
else:
assert _parse_size(inp) == outp
def test__mem_available():
# May return None on non-Linux platforms
available = _get_mem_available()
if sys.platform.startswith('linux'):
assert available >= 0
else:
assert available is None or available >= 0
import pytest
def test_cython_api_deprecation():
match = ("`scipy._lib._test_deprecation_def.foo_deprecated` "
"is deprecated, use `foo` instead!\n"
"Deprecated in Scipy 42.0.0")
with pytest.warns(DeprecationWarning, match=match):
from .. import _test_deprecation_call
assert _test_deprecation_call.call() == (1, 1)