nvidia.com: AmgX

AmgX provides a simple path to accelerated core solver technology on NVIDIA GPUs. It delivers up to 10x acceleration of the computationally intensive linear-solver portion of simulations, and is especially well suited for implicit unstructured methods.

It is a high-performance, state-of-the-art library and includes a flexible solver composition system that allows users to easily construct complex nested solvers and preconditioners.

AmgX is available with a commercial and a free license. The free license is limited to CUDA Registered Developers and non-commercial use.

Key Features

  1. Flexible configuration allows for nested solvers, smoothers, and preconditioners
  2. Ruge-Stüben algebraic multigrid
  3. Unsmoothed aggregation algebraic multigrid
  4. Krylov methods: PCG, GMRES, BiCGStab, and flexible variants
  5. Smoothers: Block-Jacobi, Gauss-Seidel, incomplete LU, Polynomial, dense LU
  6. Scalar or coupled block systems
  7. MPI support
  8. OpenMP support
  9. Flexible and simple high-level C API
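The nested solver composition is driven by a configuration that pairs an outer solver with a preconditioner and smoother. The sketch below shows the general shape of such a JSON configuration; the key names and values here are illustrative only, so consult the AmgX reference manual for the exact schema supported by your version:

```json
{
  "config_version": 2,
  "solver": {
    "solver": "PCG",
    "max_iters": 100,
    "tolerance": 1e-8,
    "monitor_residual": 1,
    "preconditioner": {
      "solver": "AMG",
      "algorithm": "AGGREGATION",
      "smoother": "BLOCK_JACOBI",
      "max_levels": 10
    }
  }
}
```

Here a preconditioned conjugate gradient outer solve is nested with an aggregation AMG preconditioner, itself smoothed by block Jacobi, illustrating the kind of composition the feature list above describes.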

AmgX is free for non-commercial use and is available for download now for CUDA Registered Developers. As a registered developer you can download the latest version of AmgX, access the support forum, and file bug reports. If you have not yet registered, do so today.
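To make the Krylov methods listed above concrete, here is a minimal Jacobi-preconditioned conjugate gradient (PCG) in pure Python. This is the textbook algorithm on a tiny dense SPD system, purely for illustration; it is not AmgX's implementation, which runs sparse versions of these methods on the GPU:

```python
# Textbook Jacobi-preconditioned conjugate gradient on a small dense
# symmetric positive-definite system. Illustrative only.

def pcg(A, b, tol=1e-10, max_iters=100):
    n = len(b)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    x = [0.0] * n
    r = b[:]                                   # residual r = b - A*0
    minv = [1.0 / A[i][i] for i in range(n)]   # Jacobi: M^-1 = diag(A)^-1
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iters):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x

# Tiny SPD test system: exact solution is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)
```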

pyfr.org: Python Flux Reconstruction

PyFR is an open-source Python-based framework for solving advection-diffusion type problems on streaming architectures using the Flux Reconstruction approach of Huynh. The framework is designed to solve a range of governing systems on mixed unstructured grids containing various element types. It is also designed to target a range of hardware platforms via an in-built domain-specific language derived from the Mako templating engine. The current release (PyFR 1.0.0) has the following capabilities:

Governing Equations – Euler, Navier–Stokes
Dimensionality – 2D, 3D
Element Types – Triangles, Quadrilaterals, Hexahedra, Prisms, Tetrahedra, Pyramids
Platforms – CPU Clusters, Nvidia GPU Clusters, AMD GPU Clusters
Spatial Discretisation – High-Order Flux Reconstruction
Temporal Discretisation – Explicit Runge-Kutta
Precision – Single, Double
Mesh Files Imported – Gmsh (.msh)
Solution Files Exported – Unstructured VTK (.vtu, .pvtu)

PyFR is being developed in the Vincent Lab, Department of Aeronautics, Imperial College London, UK.

Development of PyFR is supported by the Engineering and Physical Sciences Research Council, Innovate UK, the European Commission, BAE Systems, and Airbus. We are also grateful for hardware donations from Nvidia, Intel, and AMD.

PyFR 1.0.0 has a hard dependency on Python 3.3+ and the following Python packages:

h5py >= 2.5
mako >= 1.0.0
mpi4py >= 1.3
mpmath >= 0.18
numpy >= 1.8
pytools >= 2014.3
Note that due to a bug in numpy PyFR is not compatible with 32-bit Python distributions.
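Before installing, the interpreter's bitness can be checked from the standard library alone; this check is not part of PyFR, just a quick way to confirm you are not on a 32-bit distribution:

```python
# Size of a C pointer in the running interpreter, in bits:
# 64 on a 64-bit Python build, 32 on a 32-bit build.
import struct

bits = struct.calcsize("P") * 8
print(bits)
```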

CUDA Backend

The CUDA backend targets NVIDIA GPUs with a compute capability of 2.0 or greater. The backend requires:

CUDA >= 4.2
pycuda >= 2011.2

OpenCL Backend

The OpenCL backend targets a range of accelerators, including GPUs from AMD and NVIDIA. The backend requires:

pyopencl >= 2013.2

OpenMP Backend

The OpenMP backend targets multi-core CPUs. The backend requires:

GCC >= 4.7
A BLAS library compiled as a shared library (e.g. OpenBLAS)

Running in Parallel

To partition meshes for running in parallel it is also necessary to have one of the following partitioners installed:

metis >= 5.0
scotch >= 6.0

davidlowryduda.com: mixedmath — Explorations in math and number theory

I’m David Lowry-Duda. I’m currently a third-year PhD student at Brown University. At Brown, I study mathematics; more specifically, number theory. As an undergrad, I really enjoyed elementary and additive number theory. As a grad student studying under Dr. Jeff Hoffstein, I focus on analytic number theory.

What is number theory? I get this question a lot, but I’ve never been very good at answering it. When I took my first number theory class with Dr. Matt Baker, an excellent inspirateur likely responsible for my career path, it seemed apparent to me: number theory is the study of numbers. We like divisibility tests, primes, and the density of special numbers in progressions. This encompasses much of what I now call elementary number theory, from the prime number theorem to modern cryptography. But it does not even begin to actually answer the question (this is a sort of Dunning-Kruger classification error).

Welcome to Arb’s documentation! – Arb 2.3.0 documentation

Arb is a C library for arbitrary-precision floating-point ball arithmetic, developed by Fredrik Johansson (fredrik.johansson@gmail.com). It supports efficient high-precision computation with polynomials, power series, matrices and special functions over the real and complex numbers, with automatic, rigorous error control. The git repository is https://github.com/fredrik-johansson/arb/
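The "ball" in ball arithmetic is a midpoint-radius pair enclosing the true value: each operation computes a new midpoint and a radius guaranteed to contain all the error. A toy pure-Python illustration of the idea follows; Arb itself does this rigorously with arbitrary-precision midpoints and also accounts for rounding of the midpoint, which this sketch deliberately ignores:

```python
# Toy midpoint-radius ("ball") arithmetic: each value is the interval
# [mid - rad, mid + rad], and results enclose every possible input.
# This only propagates the input radii; it is not Arb's API.

class Ball:
    def __init__(self, mid, rad=0.0):
        self.mid = mid
        self.rad = rad

    def __add__(self, other):
        # (a +/- ra) + (b +/- rb) is contained in (a + b) +/- (ra + rb)
        return Ball(self.mid + other.mid, self.rad + other.rad)

    def __mul__(self, other):
        # |xy - ab| <= |a|*rb + |b|*ra + ra*rb for x in a+/-ra, y in b+/-rb
        rad = (abs(self.mid) * other.rad + abs(other.mid) * self.rad
               + self.rad * other.rad)
        return Ball(self.mid * other.mid, rad)

    def __repr__(self):
        return f"[{self.mid} +/- {self.rad}]"

x = Ball(3.0, 0.01)
y = Ball(2.0, 0.02)
s = x + y   # [5.0 +/- 0.03]
p = x * y   # midpoint 6.0, radius 3*0.02 + 2*0.01 + 0.01*0.02 = 0.0802
```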

FLINT: Fast Library for Number Theory

FLINT is a C library for doing number theory, maintained by William Hart. FLINT is licensed GPL v2+. FLINT supports arithmetic with numbers, polynomials, power series and matrices over many base rings, including:

Multiprecision integers and rationals
Integers modulo n
p-adic numbers
Finite fields (prime and non-prime order)
Real and complex numbers (via the Arb extension library)

Support is also currently being developed for algebraic number fields (via the Antic extension library). Operations that can be performed include conversions, arithmetic, computing GCDs, factoring, solving linear systems, and evaluating special functions. In addition, FLINT provides various low-level routines for fast arithmetic. FLINT is extensively documented and tested.
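To give a flavour of arithmetic over one of these base rings, here is a toy pure-Python multiplication of polynomials with coefficients in Z/nZ. FLINT performs the same operation with asymptotically fast low-level routines; this schoolbook O(d²) version is only an illustration:

```python
# Multiply two polynomials over Z/nZ. Polynomials are coefficient lists,
# lowest-degree term first. Schoolbook algorithm, for illustration only.

def poly_mulmod(f, g, n):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % n
    return out

# (1 + x)^2 = 1 + 2x + x^2, which over Z/2Z reduces to 1 + x^2.
print(poly_mulmod([1, 1], [1, 1], 2))  # [1, 0, 1]
```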

MathJax: MathJax is an open source JavaScript display engine for mathematics that works in all browsers

MathJax is a project of the MathJax Consortium, a joint venture of the American Mathematical Society (AMS) and the Society for Industrial and Applied Mathematics (SIAM) to advance mathematical and scientific content on the web. MathJax is generously supported by the MathJax Sponsors. The core of the MathJax project is the development of its state-of-the-art, open source, JavaScript platform for display of mathematics. Our key design goals are:

High-quality display of mathematics notation in all browsers
No special browser setup required
Support for LaTeX, MathML and other equation markup directly in the HTML source
An extensible, modular design with a rich API for easy integration into web applications
Support for accessibility, copy and paste, and other rich functionality
Interoperability with other applications and math-aware search
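In practice, putting MathJax on a page amounts to one script tag plus equation markup directly in the HTML source. A minimal sketch is below; the CDN URL and configuration name vary between MathJax versions, so check the current documentation before copying this:

```html
<!-- Load MathJax with a TeX-input configuration (URL/config vary by version) -->
<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>

<!-- LaTeX written directly in the HTML source, rendered in any browser -->
<p>When \(a \ne 0\), the roots are \(x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\).</p>
```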