A distributed database for many-core devices

GPUdb is a scalable, distributed database with SQL-style query capability, designed to store Big Data. Developers use the GPUdb API to add data and to query it with operations such as select, group by, and join.
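GPUdb's own API is not shown in this excerpt; purely as an illustration of the SQL-style operations named above (select, group by, and join), the following sketch uses Python's built-in sqlite3 module as a stand-in. The table names and schema are invented for the example and do not reflect GPUdb itself.

```python
import sqlite3

# In-memory SQLite database as a stand-in for a distributed store;
# the "sensors" and "readings" tables are invented for this example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sensors (id INTEGER PRIMARY KEY, location TEXT);
    CREATE TABLE readings (sensor_id INTEGER, value REAL);
    INSERT INTO sensors VALUES (1, 'north'), (2, 'south');
    INSERT INTO readings VALUES (1, 10.0), (1, 12.0), (2, 7.5);
""")

# A select combining a join and a group by: average reading per location.
rows = conn.execute("""
    SELECT s.location, AVG(r.value)
    FROM readings r JOIN sensors s ON r.sensor_id = s.id
    GROUP BY s.location
    ORDER BY s.location
""").fetchall()

print(rows)  # [('north', 11.0), ('south', 7.5)]
```

The same three relational operations appear in most SQL-style APIs, whether the backend is a single file or a distributed cluster.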

A GPUdb cluster can run in a number of configurations, ranging from a single GPUdb node to a large cluster of machines. GPUdb does not require pre-determined key-sharding schemes in order to scale your data across several nodes.

GPUdb can run on anything from a single laptop to several rooms full of networked servers. It performs best when it has access to highly dense many-core devices such as NVIDIA GPUs or Intel MIC cards, but these are not required.

Leveraging many-core devices is the central theme of GPUdb. Currently, GPUdb supports NVIDIA GPUs and Intel Xeon Phi many-core devices. We plan to support all OpenCL-capable many-core devices, but this has not yet been added to our internal development roadmap. GPUdb also works with any traditional x86 CPUs and attempts to take advantage of all available cores, though this is not its primary performance use case.

11th International Conference on Parallel Processing and Applied Mathematics in Krakow

September 6-9, 2015, Krakow, Poland

WLPP 2015 is a full-day workshop to be held at PPAM 2015, focusing on high-level programming for large-scale parallel systems and multicore processors, with special emphasis on component architectures and models. Its goal is to bring together researchers working in the areas of applications, computational models, language design, compilers, system architecture, and programming tools to discuss new developments in programming clouds and parallel systems. The workshop focuses on language-based programming models such as OpenMP, Intel TBB and Ct, Microsoft .NET 4.0 parallel extensions (TPL and PPL), Java parallel extensions, HPCS languages (Chapel, X10, and Fortress), Unified Parallel C (UPC), Co-Array Fortran (CAF), and GPGPU language-based programming models such as CUDA. Contributions on other high-level programming models and supportive environments for parallel and distributed systems are equally welcome.

CSC Home / Center for Scientific Computing
The Center for Scientific Computing (CSC) of the Goethe University Frankfurt currently operates three Linux-based computer clusters within the framework of the HHLR-GU (Hessisches Hochleistungsrechenzentrum der Goethe-Universität) to support numerically intensive studies in a variety of research fields, ranging from neuroscience to high-energy physics. The CPU cluster “Fuchs” is available for HPC (High Performance Computing) applications for users from all universities in Hessen. As the system is designed to support different types of applications, the cluster provides an ideal HPC infrastructure for the scientific community. The GPGPU cluster “Scout” is a testbed for users who want to develop or port code to run on modern-architecture graphics processors. Recently, the massively parallel cluster “LOEWE-CSC”, a combined CPU-GPU cluster, was installed.

GPUs Further Russia’s Supercomputing Efforts, Accelerate Its Fastest System
Overall, GPUs power three of the top 10 and nearly one-third of the top 50 systems on the list, which is issued twice yearly. It’s a remarkable stat considering that no GPU-accelerated systems were on the list just three years ago. Moscow State University’s Lomonosov supercomputer glows green with NVIDIA Tesla GPUs inside. Thanks to an upgrade with NVIDIA Tesla GPUs in 2011, Lomonosov claims its spot easily. It delivers 1.7 petaflops of peak performance, making it the fastest accelerator-based supercomputer not just in Russia but in all of Europe.

Big Red II Cray Supercomputer at Indiana University

With a peak performance of 1 petaflop, Big Red II offers users a hybrid compute environment consisting of:

- CPU compute nodes: featuring two AMD Abu Dhabi x86-64 processors with 16 cores each, these nodes provide 32 cores and 64 GB of memory per node.
- GPU-enabled compute nodes: containing one AMD Interlagos x86-64 processor and one NVIDIA Kepler K20 GPU, these nodes provide 32 GB of memory.

All compute nodes are connected through the Cray Gemini interconnect.
