AWS Partner Network (APN) Blog: High-Performance Mainframe Workloads on AWS with Cloud-Native Heirloom

By Gary Crook, CEO at Heirloom Computing

Many enterprises still run mainframes because that is where their core business applications have historically lived. With Heirloom on AWS, we can decouple the application from the physical mainframe hardware, allowing applications to run in the cloud and take advantage of the benefits of Amazon Web Services (AWS).

Heirloom automatically refactors mainframe applications’ code, data, job control definitions, user interfaces, and security rules to a cloud-native platform on AWS. Using the industry-standard TPC-C benchmark, we demonstrated the elasticity of Heirloom on AWS, delivering 1,018 transactions per second—equivalent to the processing capacity of a large 15,200 MIPS mainframe.

Heirloom is deployed on AWS Elastic Beanstalk, which facilitates elasticity (the ability to automatically scale-out and scale-back), high availability (always accessible from anywhere at any time), and pay-as-you-go (right-sized capacity for efficient resource utilization). With Heirloom on AWS, mainframe applications that were rigid and scaled vertically can be quickly and easily transformed (recompiled) so they are now agile and scaled horizontally.

Heirloom Computing is an AWS Partner Network (APN) Standard Technology Partner. In this post, we use a real-world example to outline how monolithic mainframe applications can automatically be refactored to agile cloud-native applications on AWS.

Heirloom Overview
At the core of Heirloom is a unique compiler that quickly and accurately recompiles online and batch mainframe applications (COBOL, PL/I, JCL, etc.) into Java so they can be deployed to any industry-standard Java application server, such as Apache Tomcat, Eclipse Jetty, or IBM WebSphere Liberty.

With Heirloom, once the application is deployed it retains both the original application source code and resulting refactored Java source code. Heirloom includes Eclipse IDE plugins for COBOL, PL/I, and Java, as well as a fully functional integrated JES console and subsystem for running JCL jobs. This enables a blended model for ongoing development and management of the application so you can bridge the skills gap at a pace that is optimal for you, and switch code maintenance from COBOL to Java at your convenience.


Figure 1 – Heirloom refactoring reference architecture for mainframes.

Elastic Architecture
Heirloom deploys applications to industry standard Java application servers, which means your application can instantly leverage the full capabilities of AWS. Applications can dynamically scale-out and scale-back for optimal use of compute resources, and seamlessly integrate with additional AWS managed services like AWS Lambda and Java application frameworks like Angular2.

Here’s an example that uses Amazon Alexa to interact with unmodified CICS transactions deployed as microservices, and here’s another example that utilizes Docker containers.


Figure 2 – Heirloom elastic architecture for high performance.

The Heirloom elastic architecture relies on stateless share-nothing application servers that scale horizontally across Availability Zones (AZs). Any shared or persistent data structure is stored in an elastic managed data store. On AWS, this horizontal architecture across several AZs and many instances is key for elasticity, scalability, availability, and cost optimization. AWS Elastic Beanstalk automatically handles the application deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring.
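The scale-out and scale-back behavior described above can be sketched as a simple step-scaling policy. This is an illustrative sketch only, not AWS or Heirloom code; the CPU utilization thresholds are assumed example values (Elastic Beanstalk lets you configure the actual trigger metric and thresholds).

```python
# Illustrative sketch of a step-scaling policy like the one used in the
# benchmark environment: minimum 16 instances, maximum 64, scaling in
# increments of 8 based on average CPU utilization across the fleet.
MIN_INSTANCES, MAX_INSTANCES, STEP = 16, 64, 8
SCALE_OUT_CPU, SCALE_BACK_CPU = 70.0, 30.0  # assumed trigger thresholds

def next_capacity(current: int, avg_cpu: float) -> int:
    """Return the fleet size after one evaluation of the scaling trigger."""
    if avg_cpu > SCALE_OUT_CPU:
        return min(current + STEP, MAX_INSTANCES)   # scale-out by 8, capped
    if avg_cpu < SCALE_BACK_CPU:
        return max(current - STEP, MIN_INSTANCES)   # scale-back by 8, floored
    return current

print(next_capacity(16, 85.0))  # 24 -- scale-out under load
print(next_capacity(64, 90.0))  # 64 -- capped at the maximum
print(next_capacity(24, 10.0))  # 16 -- scale-back when load drops
```

The same logic is what the AWS Auto Scaling Group evaluates continuously, so capacity tracks demand instead of being sized for peak.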

Application artifacts that are not inherently scalable are refactored to a target that automatically removes that constraint. For example, file-based data stores such as VSAM files are migrated to a relational data store using Heirloom built-in data abstraction layers. This is achieved without requiring any changes to the application code.
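As an illustration of the idea (not Heirloom's actual data abstraction layer), a keyed VSAM-style READ can be backed by a relational table so the calling logic is unchanged; the table and field names here are hypothetical.

```python
# Illustrative sketch: a keyed read, as a COBOL program would issue
# against a VSAM KSDS, served from a relational store instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cust_id TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer VALUES ('000123', 'ACME CORP')")

def read_record(key: str):
    """Equivalent of a keyed READ; None plays the role of a NOTFND condition."""
    return conn.execute(
        "SELECT cust_id, name FROM customer WHERE cust_id = ?", (key,)
    ).fetchone()

print(read_record("000123"))  # ('000123', 'ACME CORP')
```

Because the keyed-access semantics are preserved behind the abstraction, the application code above the layer does not change.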

Performance Results
Using a COBOL/CICS implementation of the industry standard TPC-C benchmark, we measured transaction throughput per MIPS by running the application on a mainframe with a known MIPS specification. We then ran the same application on AWS infrastructure to measure transaction throughput and derive a MIPS rating using the ratio from running the application on the mainframe. Consequently, we determined the AWS environment was able to consistently deliver an equivalent MIPS rating of 15,200 at a sustained transaction throughput of 1,018 transactions per second.
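The derivation above can be expressed in a few lines. The article does not publish the raw mainframe calibration numbers, so the mainframe inputs below are purely illustrative; only the 1,018 TPS figure comes from the benchmark.

```python
# Worked example of deriving an equivalent MIPS rating from a
# throughput-per-MIPS ratio. Calibration inputs are hypothetical.
mainframe_tps, mainframe_mips = 80.0, 1_200    # illustrative calibration run
tps_per_mips = mainframe_tps / mainframe_mips  # throughput per MIPS

aws_tps = 1_018                                # measured on AWS (from the article)
aws_mips = aws_tps / tps_per_mips
print(round(aws_mips))  # 15270 with these illustrative inputs
```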

For the performance test on AWS, we took more than 50,000 lines of the TPC-C application code and screens and compiled them (without any modifications) into 100 percent Java using the Heirloom Software Developer Kit (SDK) Eclipse plugin. The Java code was then packaged as a standard .war file, ready for deployment to any industry standard Java application server (such as Apache Tomcat).

The TPC-C environment on AWS is composed of:

22,500 simulated end-user terminals hosted by 10 m3.2xlarge Amazon Elastic Compute Cloud (Amazon EC2) instances.
All concurrent transactions are distributed to application instances by a single AWS Application Load Balancer which automatically scales on-demand.
The Heirloom TPC-C application layer is hosted in an AWS Elastic Beanstalk environment consisting of a minimum of 16 m4.xlarge Amazon EC2 instances (a Linux environment running Apache Tomcat). With the AWS Auto Scaling Group, this environment automatically scales-out (by increments of 8), up to a maximum of 64 instances (depending on a metric of the average CPU utilization of the currently active instances). It also automatically scales-back when the load on the system decreases.
For enhanced reliability and availability, the instances are seamlessly distributed across three different AZs (i.e. at least three different physical data centers).
The database (consisting of around five million rows of data in tables for districts, warehouses, items, stock-levels, new-orders, etc.) is hosted in an Amazon Aurora database (which is MySQL- and PostgreSQL-compatible).
The application monitoring layer is provided by Amazon CloudWatch, which provides a centralized constant examination of the application instances and the AWS resources being utilized.
The application workload at peak was distributed over a total of 144 CPU cores (each core providing 2 vCPUs), consisting of 128 CPU cores for the application layer and 16 CPU cores for the Aurora database layer. For a 15,200 MIPS capacity, this yields approximately 105 MIPS per CPU core (or 52 MIPS per vCPU). This is consistent with our client engagements, and a useful rule of thumb for initial capacity planning.
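The capacity figures above can be checked with a few lines of arithmetic, using only numbers quoted in the benchmark description:

```python
# Worked arithmetic from the benchmark figures above.
mips, cores = 15_200, 144        # total equivalent capacity, peak CPU cores
app_cores, db_cores = 128, 16    # application layer vs. Aurora layer
assert app_cores + db_cores == cores

mips_per_core = mips / cores
mips_per_vcpu = mips_per_core / 2   # each core provides 2 vCPUs

print(round(mips_per_core, 1))   # 105.6
print(round(mips_per_vcpu, 1))   # 52.8
```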

Cost Analysis
For a large mainframe of more than 11,000 MIPS, the average annual cost per installed MIPS is about $1,600. Hardware and software account for 65 percent of this, or approximately $1,040. Consequently, we determined the annual infrastructure cost for a 15,200 MIPS mainframe is approximately $16 million.

On the AWS side, using the AWS Simple Monthly Calculator to configure a similar infrastructure to the performance test, we estimated the annual cost to be around $350,000 ($29,000 monthly). This AWS cost could be further optimized with Amazon EC2 Reserved Instances.
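The cost comparison reduces to simple arithmetic on the figures quoted above:

```python
# Worked arithmetic behind the cost comparison.
mips = 15_200
infra_cost_per_mips = 1_040        # 65% of the ~$1,600 annual cost per MIPS
mainframe_annual = mips * infra_cost_per_mips   # quoted as ~$16 million

aws_monthly = 29_000
aws_annual = aws_monthly * 12       # quoted as ~$350,000

savings = 1 - aws_annual / mainframe_annual
print(f"{mainframe_annual:,}")   # 15,808,000
print(f"{aws_annual:,}")         # 348,000
print(f"{savings:.1%}")          # 97.8%
```

On infrastructure alone the reduction exceeds 90 percent, consistent with the client outcomes cited below once Heirloom licensing is included.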

The annual cost for Heirloom varies depending on the size of the application being refactored (it consists of a cost per CPU core with larger discounts for larger application workloads). With all costs accounted for, our clients typically see a cost reduction in excess of 90 percent, and positive ROI within a year.

Code Quality
With any solution that takes you from one ecosystem to another, not only is it essential that the application behaves and functions as it did before, it’s also vital that any application code produced is of the highest quality.

Using SonarQube with the default configuration, we can examine the quality of the Java application produced by Heirloom when it compiled the 50,000+ LOC in the TPC-C application (which was originally a COBOL/CICS application written for the mainframe). SonarQube rated all the major aspects (reliability, security, and maintainability) of the Java application source-code with the highest rating of “A.”


Figure 3 – SonarQube analysis for the TPC-C benchmark application refactored with Heirloom.

Refactoring Tools and Application Development
The Heirloom SDK is a plugin for the Eclipse IDE framework and provides tooling that covers all aspects of a refactoring project, as outlined in Figure 1.

The same tooling is then used for ongoing application development and maintenance. This can be done in COBOL, PL/I, Java or any mix. Each language is fully supported with a feature-rich project workspace, editor, compiler, and debugger.

Regardless of which language you choose, Heirloom applications always execute on the Java platform.


Figure 4 – Eclipse IDE with Heirloom SDK showing a COBOL debugging and editing session.

Not Just Cloud, But Cloud-Native
You can move an application to the cloud by re-hosting it (also called “lift and shift”) on an Amazon EC2 instance, retaining the existing constraints of the legacy workload such as stateful applications, monolithic application structure, file-based data stores, and vertical scaling. It works with limitations, but it is not cloud-native.

In simple terms, cloud-native is an approach to the development and deployment of applications that fully exploits the benefits of the cloud-computing platform. There are best practices for cloud-native applications, such as:

Adherence to the Twelve-Factor App methodology, including stateless share-nothing processes with persistent or shared data stored in backend databases.
Dynamic, horizontal scale-out and scale-back.
Availability for consumption as a service.
Portability between execution environments, enabling selection of fit-for-purpose compute or data store services.
Use of cloud-provided system management, such as centralized monitoring, logging, alerting, automation, and billing.
Cloud-native applications are elastic and highly scalable, embracing the elasticity of the underlying AWS services. With horizontal scalability, cloud-native applications are also more cost-optimized because you don’t need to size a fixed number of instances for peak traffic. Instead, you can use smaller instances that are instantiated or terminated automatically based on the exact workload demand.
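The stateless share-nothing principle behind this elasticity can be shown in miniature. This is an illustrative sketch, not Heirloom code; the dict-backed store simply stands in for a managed database or cache such as Aurora.

```python
# Minimal sketch of stateless share-nothing design: session state lives
# in a shared backend store, so ANY application instance can serve ANY
# request, and instances can be added or terminated freely.
backing_store = {}  # stands in for a managed data store, not instance memory

class AppInstance:
    """An application server holding no conversational state of its own."""
    def handle(self, session_id: str, item: str) -> list:
        cart = backing_store.setdefault(session_id, [])
        cart.append(item)
        return cart

# A load balancer may route successive requests to different instances:
a, b = AppInstance(), AppInstance()
a.handle("sess-1", "widget")
print(b.handle("sess-1", "gadget"))  # ['widget', 'gadget'] -- state survived
```

Because no instance owns the session, scaling out, scaling back, or replacing a failed instance never loses user state.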

The major benefit of Heirloom on AWS is that it can automatically refactor mainframe applications to Java application servers so they are cloud-native, while preserving critical business logic, user-interfaces, data integrity, and systems security.

Learn More About Heirloom on AWS
See Heirloom in action with our 60-second demo. You can also try it by downloading Heirloom SDK for free—available on Windows, Linux, and macOS.
You can read an in-depth look at how the performance benchmark on AWS was performed in my LinkedIn article.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Heirloom Computing – APN Partner Spotlight
Heirloom Computing is an APN Standard Technology Partner. Heirloom automatically transforms mainframe applications so they execute on Java application servers, while preserving critical business logic, user-interfaces, data integrity, and systems security. With Heirloom, mainframe applications can quickly and easily be transformed to be agile, open, and scaled horizontally.

Contact Heirloom Computing | Solution Brief | Solution Demo | Free Trial

*Already worked with Heirloom Computing? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

TAGS: Amazon Alexa, Amazon EC2, APN Partner Guest Post, APN Partner Spotlight, APN Technology Partners, Application Load Balancer, Auto Scaling, AWS Elastic Beanstalk, AWS Lambda, AWS Partner Solutions Architects (SA), Docker, Heirloom Computing, Mainframe Modernization, Migration

BluAge legacy modernization


Refactoring, replatforming, rewriting, repurchasing… compare the main legacy modernization techniques.

Application Modernization Service Quadrant
Blu Age rated as innovator by 360Quadrants under Application Modernisation space – September 2019
Blu Age evolves from innovator to visionary leader in Apps Transformation
Blu Age ahead of the pack on product maturity
Blu Age in the top Apps Transformation players
“Blu Age’s solutions also allow users to exercise complete control over project duration and to slice and decompose monolithic applications into independent components…”
“Blu Age has developed a portal that showcases its product offerings and allows partners to access technical and marketing resources…”

Our mission: Help private and public organizations enter the digital era by modernizing their legacy systems while substantially reducing modernization costs, shortening project durations, and mitigating the risk of failure.

Blu Age modernization in numbers
Faster modernization projects
Up to 20 Times Faster
Shorten your Cloud migration from months to weeks or days.

Cheaper modernization projects
4 to 10 Times Cheaper
Automation minimizes the workload for experienced developers.

Successful modernization projects
100% Success
All technical risks are identified, measured and addressed at early stage.

Why use Blu Age?
Proven Solutions
Blu Age products have been successfully used in more than 80 projects.

Faster Projects
The high degree of automation substantially reduces project duration.

Minimized Risks
100% of the potential risks are identified, mitigated and controlled at early stage.

100% Automated
Blu Age fully automates the modernization of legacy code and data into modern technologies.

High Quality Code
Blu Age generates modern source code ready for future evolutions and maintenance.

Reduced Budgets
Blu Age guarantees highly competitive financial offers in the modernization landscape.

Blu Age accelerates your modernization into the world’s top Cloud providers
Digital technologies such as Cloud, mobile, and smart assistants are creating a world of new opportunities for businesses today, helping companies save millions, creating efficiencies, improving quality, and increasing product visibility. Unfortunately, legacy systems seldom support digital technologies, and when coupled with the numbers of skilled workers retiring, legacy systems are becoming increasingly risky and costly to operate, making legacy modernization mission-critical.

If you are looking for Mainframe COBOL, PL/I, Natural Adabas, CA Ideal Datacom, PowerBuilder, RPG 400, Delphi, CoolGen, Visual Basic or any other legacy language modernization to the Cloud, Blu Age gets you there with speed, scale and safety.


© 2005-2020 BLU AGE


Heirloom Computing: Replatform Mainframe Applications as Cloud-Native Java Applications

Heirloom Computing
The Cloud-Native Mainframe™
Heirloom® is the only proven cloud-native mainframe replatforming solution in the market.
Read this Amazon AWS blog that illustrates why leading financial services companies and government agencies are embracing Heirloom to replatform their mainframe workloads as agile cloud-native applications.

Digital Transformation of Mainframe Applications
Heirloom® automatically replatforms mainframe applications so they execute on any Cloud, while preserving critical business logic, user-interfaces, data integrity, and systems security.
Replatforming with Heirloom is 10X faster than re-engineering, cutting operational costs up to 90%, with a typical ROI well inside 12 months.

Open, Powerful, Flexible
Heirloom® applications execute on any industry-standard Java Application Server, removing any dependency on proprietary containers, and therefore eliminating vendor lock-in.
With state-of-the-art Eclipse plugins, application development can continue in the original language (e.g. COBOL, PL/I, JCL) or Java, or any mix. This unprecedented flexibility means you can move fast with a blended model that makes best use of your technical resources.

Agile Cloud-Native Applications
Replatforming with Heirloom® delivers agile cloud-native applications that can be deployed on-premise or to any cloud. Applications can dynamically scale-out & scale-back, with high-availability, and cost-effective utilization of resources.
Seamlessly integrate & extend Heirloom applications with powerful open source application frameworks to quickly add new functions, re-factor code, and construct microservices.

Heirloom® Overview
Heirloom is a state-of-the-art software solution for replatforming online & batch mainframe workloads as 100% cloud-native Java applications.

It is a fast & accurate compiler-based approach that delivers strategic value through creation of modern agile applications using an open industry-standard deployment model that is cloud-native.

How It Works In 60 Seconds
Think you’ve heard it before? Got 60 seconds? Let us show you the Heirloom difference; watch the video below.

Try It For Yourself
You can download the Heirloom SDK via our courseware for free today. It is available on Windows, Linux, and macOS.

COBOL Compiler

Fast & Accurate
The core technology of Heirloom is a patented compiler that can recompile & refactor very large complex mainframe applications built from millions of lines of code into Java in minutes. The resulting application is guaranteed to exactly match the function & behavior of the original application.

Complete Solution
Mainframe applications are dependent upon key subsystems such as transaction processors, job control, file handlers, and resource-level security & authentication. Heirloom faithfully replicates all of these major subsystems by providing a Java equivalent (for example, JES/JCL) or a layer that provides a seamless mapping to an open systems equivalent (for example, Open LDAP for security).

Built for Cloud
Heirloom was designed and built for the cloud from the start. Cloud-native deployment delivers application elasticity (the ability to dynamically scale-out and scale-back), high availability (always accessible from anywhere at any time), and pay-for-use (dynamic right-sizing of capacity for efficient resource utilization).
Heirloom Computing works with industry-leading systems integrators to offer complete application modernization & PaaS enablement solutions to enterprises and ISVs.

Interested in partnering with us?

© 2010-2020 Heirloom Computing Inc. All Rights Reserved.

BigBlueButton Online Learning


Build Upon Us
BigBlueButton is completely open source and made by a community of dedicated developers passionate about helping improve online learning.

How we started
BigBlueButton is an open source web conferencing system for online learning. The goal of the project is to provide remote students a high-quality online learning experience.

Given this goal, it shouldn’t surprise you that BigBlueButton started at a university (though it may surprise you which one). BigBlueButton was created by a group of very determined software developers who believe strongly in the project’s social benefits and entrepreneurial opportunities. Starting an open source project is easy: it takes about five minutes to create a GitHub account. Building a successful open source project that solves the complex challenges of synchronous learning, however, takes a bit longer.

There has been a core group of committers working on BigBlueButton since 2007. To date there have been over a dozen releases of the core product. For each release the committers have been involved in developing new features, refactoring code, supporting the community, testing, and documentation.

We believe that focus is important to the success of any open source project. We are focused on one market: online learning. If you are an institution (educational or commercial) that wishes to teach your students in a synchronous environment, we are building BigBlueButton for you.

Interested in trying BigBlueButton?
Check out the tutorial videos and then try out BigBlueButton on our demo server.


© 2020 BigBlueButton.

White Paper
Data Processing and Storage for Black Hole Event Horizon Imaging
Tom Coughlin, President, Coughlin Associates, Inc.
Executive Summary
The first imaging of the event horizon for a black hole involved an international partnership of 8 radio telescopes with major data processing at MIT and the Max Planck Institute (MPI) in Germany. The contribution of the brilliant scientists was aided by the expanded computing power of today’s IT infrastructure. Processing of the 4 petabytes (PB) of data generated in the project in 2017 for the original imaging utilized servers and storage systems, with many of these servers coming from Supermicro. Figure 1 shows the MIT Correlator Cluster as it looked in 2016.
Super Micro Computer, Inc.
980 Rock Avenue
San Jose, CA 95131 USA
Table of Contents
Executive Summary
Black Holes and their Event Horizons
The Black Hole Event Horizon Imaging Project
Capturing and Storing EHT Data
The EHT Correlators
Supermicro in the EHT Correlators
Processing the EHT Data
The Future of Black Hole Observations
Infrastructure Possibilities of Future Black Hole Observations
About the Author
About Supermicro

June 2019
[1] Titus et al., Haystack Observatory VLBI Correlator 2015–2016 Biennial Report, in International VLBI Service for Geodesy and Astrometry 2015+2016 Biennial Report, edited by K. D. Baver, D. Behrend, and K. L. Armstrong, NASA/TP-2017-219021, 2017.
Black Holes and their Event Horizons
With the publication of Albert Einstein’s general theory of relativity in 1915, our views of the nature of space, time, and gravity were changed forever. Einstein’s theory showed that gravity is created by the curvature of space around massive objects.

When the nuclear fusion processes that create the radiation of stars begin to run out of fuel and no longer exert sufficient outward pressure to overcome the gravity of the star, the star’s core collapses. For a low-mass star like our Sun, this collapse results in a white dwarf star that eventually cools and becomes a nearly invisible black dwarf. Neutron stars are the result of the gravitational collapse of the cores of larger stars, which just before their collapse blow much of their mass away in a type of supernova. Even larger stars, more than double the mass of our Sun, are so massive that when their nuclear fuel is expended, they continue to collapse, with no other force able to resist their gravity, until they form a singularity (or point-like region) in spacetime. For these objects, the escape velocity exceeds the speed of light, so conventional radiation cannot escape and a black hole is formed.
Figure 1. Detail of DiFX Correlator cluster racks showing Supermicro servers [1]
The black hole singularity is surrounded by a region of space that is called the event horizon. The event horizon has a strong but finite curvature. Radiation can be emitted from the event horizon by material falling into the black hole or by quantum mechanical processes that allow a particle/antiparticle pair to effectively tunnel out of the black hole, releasing radiation (Hawking radiation).

Black holes are believed to be fairly common in our universe and can occur in many sizes, depending upon their initial mass. Hawking radiation eventually results in the “evaporation” of a black hole, and smaller, less massive black holes evaporate faster than more massive ones. Supermassive, long-lived black holes are believed to be at the center of most galaxies, including our Milky Way. Until 2019, no one had imaged the space around a black hole.
The Black Hole Event Horizon Imaging Project
Radio telescopes allow observations even in the presence of cloud cover, and microwave radiation is not absorbed by interstellar clouds of dust. For these reasons, radio telescopes provide more reliable imaging of many celestial objects; 1.3 mm is a popular observing wavelength.

Using an array of 8 international radio telescopes in 2017, astrophysicists used sophisticated signal processing algorithms and global very long baseline interferometry (VLBI) to turn 4 petabytes of data obtained from observations of a black hole in a neighboring galaxy into the first image of a black hole event horizon. This particular black hole is located 55 million light-years away (in galaxy Messier 87, M87). It is 3 million times the size of the Earth, with a mass 6.5 billion times that of the Sun. A version of the images shown in Figure 2, showing a glowing orange ring created by gases and dust falling into the black hole, appeared on the front pages of newspapers and magazines worldwide in mid-April 2019.
The eight different observatories, located in six distinct geographical locations, shown in Figure 3, formed the Event Horizon Telescope (EHT) array. This collection of telescopes provided an effective imaging aperture close to the diameter of the Earth, allowing the resolution of very small objects in the sky. Observations were made simultaneously at 1.3 mm wavelength, with hydrogen maser atomic clocks used to precisely time stamp the raw image data.
Figure 2. First M87 Event Horizon Telescope Results. III. Data Processing and Calibration, The Event Horizon Telescope Collaboration, The Astrophysical Journal Letters, 875:L3 (32pp), 2019 April 10. [Image: M87* as observed on April 5, 6, 10, and 11, 2017, with a 50 μas scale bar and a brightness temperature color scale.]

Figure 3. Akiyama et al., First M87 Event Horizon Telescope Results. III. Data Processing and Calibration, The Event Horizon Telescope Collaboration, The Astrophysical Journal Letters, 875:L3 (32pp), 2019 April 10.
Capturing and Storing EHT Data
VLBI allowed the EHT to achieve an angular resolution of 20 micro-arcseconds, said to be good enough to locate an orange on the surface of the Moon from the Earth. Observation data was collected over five nights, from April 5–11, 2017, with observations made at each site as the weather conditions were favorable. Each telescope generated about 350 TB of data a day, and the EHT sites recorded their data at 64 Gb/s.
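Two of the figures above can be sanity-checked with back-of-envelope arithmetic. The resolution check assumes the ideal diffraction limit λ/D with the Earth's diameter as the baseline, which is an approximation, not the collaboration's actual calculation.

```python
# Back-of-envelope checks on the quoted figures (illustrative only).
wavelength = 1.3e-3            # observing wavelength, metres
earth_diameter = 1.2742e7      # effective aperture ~ Earth's diameter, metres
rad_to_uas = 206_264.8 * 1e6   # radians -> micro-arcseconds

resolution_uas = wavelength / earth_diameter * rad_to_uas
print(round(resolution_uas))   # 21 -- consistent with the quoted 20 uas

# 64 Gb/s recording vs. ~350 TB per telescope per day:
seconds = 350e12 / (64e9 / 8)      # bytes / (bytes per second)
print(round(seconds / 3600, 1))    # 12.2 -- hours of recording per day
```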
The data was recorded in parallel by four digital backend (DBE) systems onto 32 helium-filled hard disk drives (HDDs) each, or 128 HDDs per telescope; across the 8 telescopes, 1,024 HDDs were used. The Western Digital helium-filled HDDs used were first obtained in 2015, when they were the only sealed He-filled HDDs available. Sealed He-filled HDDs were found to operate most reliably at the high altitudes of the radio telescopes. Processing of the collected data was done simultaneously using correlators at the MIT Haystack Observatory (MIT) and the Max Planck Institute in Germany (MPI).
According to Helge Rottmann from MPI [2], “For the 2017 observations the total data volume collected was about 4 PB.” He also said that starting in 2018 the volume of data collected by the EHT doubled to 8 PB. The DBEs acquired data from the upstream detection equipment using two 10 Gb/s Ethernet network interface cards at 128 Gb/s. Data was written using a time-sliced round-robin algorithm across the 32 HDDs. The drives were mounted in groups of eight in four removable modules. After the data was collected, the HDD modules were flown to the Max Planck Institute (MPI) for Radio Astronomy in Bonn, Germany for high frequency band data analysis, and to the MIT Haystack Observatory in Westford, Massachusetts for low frequency band data analysis.
Vincent Fish from the MIT Haystack Observatory said [3], “It has traditionally been too expensive to keep the raw data, so the disks get erased and sent out again for recording. This could change as disk prices continue to come down. We still have the 2017 data on disk in case we find a compelling reason to re-correlate it, but in general, once you’ve correlated the data correctly, there isn’t much need to keep petabytes of raw data around anymore.”
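The time-sliced round-robin recording scheme described above can be sketched in a few lines. This is an illustrative sketch, not the actual DBE recorder software; each Python list simply stands in for one HDD.

```python
# Illustrative sketch: time-sliced round-robin striping of an incoming
# stream across 32 drives, as described for the digital backend systems.
N_DRIVES = 32
drives = [[] for _ in range(N_DRIVES)]  # each list stands in for one HDD

def record(time_slices):
    """Write successive time slices to successive drives, wrapping around."""
    for i, chunk in enumerate(time_slices):
        drives[i % N_DRIVES].append(chunk)

record([f"slice-{t}" for t in range(64)])  # 64 slices -> 2 per drive
print(len(drives[0]), drives[0])  # 2 ['slice-0', 'slice-32']
```

Striping this way spreads the sustained write load evenly, so the aggregate recording rate is roughly the per-drive rate times the number of drives.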
The EHT Correlators
The real key to extracting the amazing images of the event horizon of a black hole was the use of advanced signal processing algorithms to process the data. Through the receiver and backend electronics at each telescope, the sky signal is mixed to baseband, digitized, and recorded directly to hard disk, resulting in petabytes of raw VLBI voltage signal data. The correlator uses an a priori Earth geometry and a clock/delay model to align the signals from each telescope to a common time reference. The sensitivity of the antennas also had to be calculated to create a correlation coefficient between the different antennas.
[2] Email from Helge Rottmann, Max Planck Institute, May 7, 2019.
[3] Email from Vincent Fish, MIT Haystack Observatory, April 30, 2019.
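The core alignment step a correlator performs can be shown with a toy example: find the relative delay between two recordings of the same signal by maximizing their cross-correlation. Real correlators such as DiFX apply the a priori geometric and clock delay model first and work on voltage streams at enormous scale; this brute-force search over a tiny integer signal is purely illustrative.

```python
# Toy sketch of delay alignment by cross-correlation.
signal = [0, 1, 3, 1, 0, -2, -1, 0, 2, 4, 2, 0]
delay = 3
station_a = signal
station_b = [0] * delay + signal[:-delay]  # same signal, arriving 3 samples late

def best_lag(a, b, max_lag=5):
    """Return the lag of b relative to a with the largest correlation."""
    def corr(lag):
        return sum(x * y for x, y in zip(a, b[lag:]))
    return max(range(max_lag + 1), key=corr)

print(best_lag(station_a, station_b))  # 3 -- the delay is recovered
```

Once every station's stream is shifted to a common time reference like this, the pairwise correlation coefficients can be formed and imaging can proceed.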
The actual processing was performed with DiFX software [4] running on high performance computing clusters at MPI and MIT. The clusters are composed of hundreds of servers, thousands of cores, high performance networking (25 GbE and FDR InfiniBand), and RAID storage servers. The MIT Haystack correlator is shown in Figure 4.

The MPI cluster, located in Bonn, Germany, comprises 3 Supermicro head node servers; 68 compute nodes (20 cores each, for 1,360 cores); 11 Supermicro storage RAID servers running the BeeGFS parallel file system with a capacity of 1.6 petabytes; FDR InfiniBand networking; 15 Mark 5 playback units (one is shown in Figure 5); and 9 Mark 6 playback units.
The MIT cluster is housed within 10 racks (9 visible in Figure 4). Three generations of Supermicro servers were used, the newest having two 10-core Intel® CPUs. The network consists of Mellanox® 100/50/40/25 GbE switches, with the majority of nodes on the high speed network at 25 GbE or higher via Mellanox PCIe add-on NICs. In addition to the Mark 6 recorders, there is half a petabyte of storage scattered across the various Supermicro storage servers for staging raw data and archiving correlated data products [1].
Supermicro in the EHT Correlators
The correlators make extensive use of Supermicro's large portfolio of Intel Xeon
processor-based systems and Building Block Solutions® to deploy fully optimized
compute and storage solutions for various workloads. For example, the Haystack DiFX
correlator depicted in Figure 6 leverages Supermicro solutions for compute, storage,
administration, and maintenance tasks.
Figure 4. Photo by Nancy Wolfe Kotary, MIT Haystack Observatory
Figure 5. Mark 5 Playback Unit
4 Deller, A. T., Brisken, W. F., Phillips, C. J., et al. 2011, PASP, 123, 275
Haystack DiFX Correlator
Due to the high resource demands of DiFX, 2×2 clustered Supermicro head-end nodes
(4 total), shown in Figure 7, are required to launch the correlations, farm out pieces of
the correlations to the compute nodes, collect and combine the processed correlation
pieces, and write out the correlated data products. These 4U, two-processor head-end
nodes with 24 3.5″ drives use the onboard hardware SAS RAID controller to achieve high
output data rates and data protection.
Figure 6. Haystack DiFX Correlator
Figure 7. Supermicro 4U Headend Node Systems
[Figure 6 diagram labels: 2×2 clustered storage head nodes (4U 2P 24-drive), compute
cluster (2U 2-node 2P), Mark 6 unit, and RAID storage servers (3U 2P 8-drive, 7 PCI-E).]
A total of 60 compute nodes comprise the MIT cluster. Thirty-eight of these nodes are
Supermicro TwinPro multi-node systems (19 systems in total), shown in Figure 8, with
Intel Xeon E5-2640 v4 processors. Each Twin multi-node system contains two independent
dual-processor compute nodes in a single chassis, doubling the density of traditional
rackmount systems, with shared power and cooling for improved power efficiency
and serviceability.
The cluster also contains 16 previous-generation compute nodes, 3U dual-socket
Supermicro servers with Intel Xeon E5-2680 v2 processors (Figure 9).
Figure 8. Supermicro TwinPro Multi Node Systems
Figure 9. Supermicro SuperServer
The clustered storage nodes are configured with redundant high efficiency power
supplies and optimized redundant cooling to save energy, SAS3 expander options for
ease of interconnection, plus a variety of drive bay options depending on task.
At the core of the MIT DiFX Correlator is a high-performance data storage cluster based
on four Supermicro storage systems, delivering high I/O throughput and data availability
through 10 Gigabit Ethernet networking fabrics and RAID controllers.
These systems are built with a selection of different Supermicro serverboards and
chassis, with support for dual or single Intel® processors, SAS3 drives with onboard
hardware RAID controllers, onboard dual 10 GbE for efficient networking, up to 2 TB of
DDR4 memory, and 7 PCI-E 3.0 expansion slots for external drive capabilities.
Processing the EHT Data
According to Vincent Fish2, “The time for computation is in general a complicated function of
a lot of different parameters (not just the number of stations or baselines being correlated).
In general, we can correlate the data from one 2 GHz chunk of EHT data a bit slower than real
time. However, the telescopes don’t record continuously—there are gaps between scans on
different sources, and an observing night doesn’t last 24 hours—so we could correlate a
day’s worth of data in about a day per 2 GHz band if we ran the correlator continuously.
The 2017 data consisted of two 2 GHz bands, one of which was correlated at Haystack
and the other in parallel at MPIfR (MPI). The 2018 data consists of four 2 GHz bands; each
correlator is responsible for two of them.”
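The per-band data volumes follow directly from Nyquist sampling. A rough sketch, assuming 2-bit sampling and dual polarization (typical VLBI recording parameters, not figures quoted in this paper):

```python
def band_rate_gbps(bandwidth_hz, bits_per_sample=2, n_pol=2):
    # Real-valued Nyquist sampling needs 2 samples per hertz of bandwidth
    samples_per_sec = 2 * bandwidth_hz
    return samples_per_sec * bits_per_sample * n_pol / 1e9

rate = band_rate_gbps(2e9)              # one 2 GHz band -> 16.0 Gbit/s
tb_per_8h = rate / 8 * 3600 * 8 / 1000  # ~57.6 TB per station per band
```

Under these assumptions one 2 GHz band records at 16 Gbit/s, which is consistent with the Mark 6 maximum playback rate mentioned below, and a night of observing quickly reaches tens of terabytes per station per band.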
The Mark 6 playback units (Figure 10) at the MIT correlator are connected via 40 Gbps
data links. A 100 Gbps network switch then delivers data to the processing nodes using
25 Gbps links. At MPI the internode communication, which includes the Mark 6 playback
units, is realized via 56 Gbps connections, exceeding the maximum playback rate of the
Mark 6 units of 16 Gbps3.
The averaging time and bandwidth in the correlators are set to ensure that any coherence
losses due to delay or rate variations are negligible, or, equivalently, that such variations
can be tracked both in time and frequency.
The processing was divided between the two sites and included crosschecking of results.
The supercomputers at MPI and MIT correlated and processed the raw radio telescope
data from the various observing sites.
After the initial correlation, the data are further processed through a pipeline that results
in final data products for use in imaging, time-domain analyses, and modeling.
Data were correlated with an accumulation period (AP) of 0.4 s and a frequency resolution
of 0.5 MHz.
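Those two numbers fix the basic shape of the correlator output. A quick sketch of what they imply (illustrative arithmetic only, using the 2 GHz band width from earlier in this paper):

```python
bandwidth_hz = 2.0e9    # one EHT frequency band
resolution_hz = 0.5e6   # spectral resolution quoted above
ap_seconds = 0.4        # accumulation period (AP) quoted above

n_channels = round(bandwidth_hz / resolution_hz)    # 4,000 channels per band
fft_segment_s = 1.0 / resolution_hz                 # 2 microseconds per spectrum
spectra_per_ap = round(ap_seconds / fft_segment_s)  # 200,000 spectra averaged per AP
```

Each 0.4 s accumulation therefore averages a very large number of short spectra, which is what drives down the noise on each correlation coefficient.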
Note that ALMA refers to the Atacama Large Millimeter/submillimeter Array in Chile.
ALMA was configured as a phased array of radio telescopes and was a recent addition to
the EHT effort with significant resolution capability. ALMA was treated as a highly accurate
anchor station and thus was used to improve the sensitivity limits of the global EHT array.
Figure 10. Mark 6 Playback Unit
Although operating as a single instrument spanning the globe, the EHT remains a mixture
of new and well-exercised stations, single-dish telescopes, and phased arrays with varying
designs and operations. Each observing cycle over the last several years was accompanied
by the introduction of new telescopes to the array, and/or significant changes and
upgrades to existing stations, data acquisition hardware, and recorded bandwidth.
EHT observations result in data spanning a wide range of signal-to-noise ratio (S/N) due to
the heterogeneous nature of the array, and the high observing frequency produced data
that were particularly sensitive to systematics in the signal chain. These factors, along with
the typical challenges associated with VLBI, motivated the development of specialized
processing and calibration techniques.
The end result of all this work, involving an intense international collaboration, was the
first image of a black hole event horizon.
The Future of Black Hole Observations
The EHT team’s initial goal was to image the event horizon of the massive black hole at
the center of our own Milky Way galaxy, SgrA*. However, this proved more difficult than
originally anticipated, since its structure changes on the timescale of minutes.
According to Max Planck Director Anton Zensus5, “the heart of our Milky Way is hidden in
a dense fog of charged particles. This leads to a flickering of the radio radiation and thus
to blurred images of the center of the Milky Way, which makes the measurements more
difficult. But I am confident that we will ultimately overcome this difficulty. On the other
hand, M87 is about 2,000 times further away. However, the black hole in its center is also
about 1,000 times more massive than the one in our Milky Way. The greater mass makes
up for the greater distance. The shadow of the black hole in M87 therefore appears to us
to be about half the size of the one from the gravity trap in our Milky Way.”
To date, the EHT has observed the black holes in just one wavelength—light with
a wavelength of 1.3 millimeters. But the project soon plans to look at the 0.87-mm
wavelength as well, which should improve the angular resolution of the array. It should
also be possible to sharpen the existing images using additional algorithmic processing.
As a consequence, we should expect better images of M87 and other black holes in the
not too distant future. The addition of more participating radio telescope sites will also
help improve the observational imaging.
The EHT team also wants to move from only ground-based VLBI to space-based imaging
using a space-based radio telescope. Going into space would allow the EHT to have
radio telescopes that are even further apart and thus able to capture some even more
astounding and higher resolution images of the black holes around us. “We could make
movies instead of pictures,” EHT Director Sheperd Doeleman said in an EHT talk at the
South by Southwest (SXSW) festival in Austin, Texas5. “We want to make a movie in real time
of things orbiting around the black hole. That’s what we want to do over the next decade.”
5 “An astounding coincidence with theory”, Anton Zensus, Max Planck Director interview,
One of the big obstacles to using a space-based EHT dish is data transmission. For the
ground-based experiments, HDDs were physically transported from the telescope sites
to central processing facilities at MPI and MIT. It is not clear yet how data would be sent
from the space telescopes to earth, but laser communication links are one possibility.
Transferring large amounts of data to the ground may require substantial onboard data
storage and a geosynchronous satellite acting as a relay (Figure 6).
Deepening our understanding of the universe around us requires sophisticated IT
infrastructure. This includes ever more digital processing with more advanced algorithms,
using faster and more sophisticated servers, as well as fast and vast digital storage for
capturing and processing the sea of data generated by big international science projects
such as the Event Horizon Telescope project.
Infrastructure Possibilities of Future Black Hole Observations
Future work to gain higher resolution and even time sequenced data (e.g. videos) of
black hole event horizons (including the black hole at the center of our Milky Way galaxy)
will involve new data and more sophisticated analysis algorithms as well as the use of
radio telescopes in space. These efforts can leverage the massive improvements already
available in today’s state-of-the-art IT Infrastructure.
The core count of the 10 Twin systems used in the current correlator could be achieved
with a single Supermicro BigTwin™ multi-node system: 2U, 4 nodes, dual-socket 205W
Intel Xeon Scalable processors, 24 DIMMs of DDR4 memory, and 6 all-flash NVMe drives
per node. The system delivers better density and improved power efficiency.
5 SXSW 2019 Panel, Event Horizon Telescope: A Planetary Effort to Photograph a Black Hole, Sheperd Doeleman,
Dimitrios Psaltis, Sera Markoff, Peter Galison,
Figure 11. Supermicro BigTwin™
The rendering of images could be accelerated from hours to seconds with advanced
GPU systems such as a 1U 4-GPU server (Figure 12).
The 960 terabytes of data could be stored on a single 1U Petascale server with
all-flash NVMe solid-state drives, offering an order of magnitude better performance
and reduced latency while eliminating the environmental issues introduced by the high
altitude (Figure 13).
These are just a few examples of the new state-of-the-art IT Infrastructure available to
researchers across the globe to support and enhance future research and discovery.
Figure 12. Supermicro GPU-optimized server systems
Figure 13. Supermicro All-Flash NVMe storage systems
Super Micro Computer, Inc.
980 Rock Avenue
San Jose, CA 95131 USA
No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical,
including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the
copyright owner.
Supermicro, the Supermicro logo, Building Block Solutions, We Keep IT Green, SuperServer, Twin, BigTwin, TwinPro, TwinPro²,
SuperDoctor are trademarks and/or registered trademarks of Super Micro Computer, Inc.
Ultrabook, Celeron, Celeron Inside, Core Inside, Intel, Intel Logo, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Inside
Logo, Intel vPro, Itanium, Itanium Inside, Pentium, Pentium Inside, vPro Inside, Xeon, Xeon Phi, and Xeon Inside are trademarks of
Intel Corporation in the U.S. and/or other countries.
All other brand names and trademarks are the property of their respective owners.
© Copyright 2019 Super Micro Computer, Inc. All rights reserved.
Printed in USA Please Recycle
About the Author
Tom Coughlin, President, Coughlin Associates is a digital storage analyst as well as a business and technology
consultant. He has over 37 years in the data storage industry with engineering and management positions at
several companies.
Dr. Coughlin has many publications and six patents to his credit. Tom is also the author of Digital Storage
in Consumer Electronics: The Essential Guide, which is now in its second edition with Springer. Coughlin
Associates provides market and technology analysis as well as Data Storage Technical and Business
Consulting services. Tom publishes the Digital Storage Technology Newsletter, the Media and Entertainment
Storage Report, the Emerging Non-Volatile Memory Report and other industry reports. Tom is also a regular contributor on digital
storage for and other blogs.
Tom is active with SMPTE (Journal article writer and Conference Program Committee), SNIA (including a founder of the SNIA SSSI),
the IEEE, (he is past Chair of the IEEE Public Visibility Committee, Past Director for IEEE Region 6, President of IEEE USA and active in
the Consumer Electronics Society) and other professional organizations. Tom is the founder and organizer of the Storage Visions
Conference as well as the Creative Storage Conference. He was the general
chairman of the annual Flash Memory Summit for 10 years. He is a Fellow of the IEEE and a member of the Consultants Network of
Silicon Valley (CNSV). For more information on Tom Coughlin and his publications and activities go to
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of
advanced server Building Block Solutions® for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded
Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green®” initiative and provides
customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Amazon’s Arm-based Graviton2 Against AMD and Intel: Comparing Cloud Compute
by Andrei Frumusanu on March 10, 2020 8:30 AM EST
Posted in: Cloud Computing, Neoverse N1

It’s been a year and a half since Amazon released their first-generation Graviton Arm-based processor core, publicly available in AWS EC2 as the so-called ‘A1’ instances. While the processor didn’t impress all too much in terms of its performance, it was a signal and first step of what’s to come over the next few years.

This year, Amazon is doubling down on its silicon efforts, having announced the new Graviton2 processor last December, and planning public availability on EC2 in the next few months. The latest generation implements Arm’s new Neoverse N1 CPU microarchitecture and mesh interconnect, a combined infrastructure oriented platform that we had detailed a little over a year ago. The platform is a massive jump over previous Arm-based server attempts, and Amazon is aiming for nothing less than a leading competitive position.

Amazon’s endeavours in designing a custom SoC for its cloud services started back in 2015, when the company acquired Israel-based Annapurna Labs. Annapurna had previously worked on networking-focused Arm SoCs, mostly used in products such as NAS devices. Under Amazon, the team was tasked with creating a custom Arm server-grade chip, and the new Graviton2 is the first serious attempt at disrupting the space.

So, what is the Graviton2? It’s a 64-core monolithic server chip design, using Arm’s new Neoverse N1 cores (microarchitectural derivatives of the mobile Cortex-A76 cores) as well as Arm’s CMN-600 mesh interconnect. It’s a pretty straightforward design that is essentially almost identical to Arm’s 64-core reference N1 platform that the company had presented a little over a year ago. Amazon did diverge a little bit: for example, the Graviton2’s CPU cores are clocked a bit lower at 2.5GHz, and it includes only 32MB instead of 64MB of L3 cache in the mesh interconnect. The system is backed by 8-channel DDR4-3200 memory controllers, and the SoC supports 64 PCIe 4.0 lanes for I/O. It’s a relatively textbook implementation of the N1 platform, manufactured on TSMC’s 7nm process node.
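Those memory specs translate into substantial theoretical bandwidth. A back-of-the-envelope sketch (the 8-bytes-per-channel bus width is the standard DDR4 figure, assumed here rather than stated by Amazon):

```python
def peak_dram_bw_gbs(channels, mega_transfers_per_sec, bus_bytes=8):
    # Peak bandwidth = channels x transfer rate x bus width per channel
    return channels * mega_transfers_per_sec * 1e6 * bus_bytes / 1e9

graviton2_bw = peak_dram_bw_gbs(8, 3200)  # 8x DDR4-3200 -> 204.8 GB/s
```

Real sustained bandwidth will of course land below this peak, but it gives a sense of the headroom feeding the 64 cores.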

The Graviton2’s potential is of course enabled by the new N1 cores. We’ve already seen the Cortex-A76 perform fantastically in last year’s mobile SoCs, and the N1 microarchitecture is expected to bring even better performance and server-grade features, all whilst retaining the power efficiency that’s made Arm so successful in the mobile space. The N1 cores remain very lean and efficient, at a projected ~1.4mm² for a 1MB L2 cache implementation such as on the Graviton2, and sporting excellent power efficiency at around ~1W per core at the 2.5GHz frequency at which Amazon’s new chip arrives.

Total power consumption of the SoC is something that Amazon wasn’t too willing to disclose in the context of our article – the company is still holding some aspects of the design close to its chest even though we were able to test the new chipset in the cloud. Given the chip’s more conservative clock rate, Arm’s projected figure of around 105W for a 64-core 2.6GHz implementation, and Ampere’s recent disclosure of their 80-core 3GHz N1 server chip coming in at 210W, we estimate that the Graviton2 must come in around anywhere between 80W as a low estimate to around 110W for a pessimistic projection.

Testing In The Cloud With EC2
Given that Amazon’s Graviton2 is a vertically integrated product specifically designed for Amazon’s needs, it makes sense that we test the new chipset in its intended environment (besides the fact that it’s not available in any other way!). For the last couple of weeks, we’ve had preview access to Amazon Web Services’ (AWS) Elastic Compute Cloud (EC2) new Graviton2-based “m6g” instances.

For readers unfamiliar with cloud computing, this essentially means we’ve been deploying virtual machines in Amazon’s datacentres, a service for which Amazon has become famous and which now represents a major share of the company’s revenues, powering some of the biggest internet services on the market.

An important metric determining the capabilities of such instances is their type (essentially dictating what CPU architecture and microarchitecture powers the underlying hardware) and possible subtype; in Amazon’s case this refers to variations of platforms that are designed for specialised use-cases, such as having better compute capabilities or having higher memory capacity capabilities.

For today’s testing we had access to the “m6g” instances which are designed for general purpose workloads. The “6” in the nomenclature designates Amazon’s 6th generation hardware in EC2, with the Graviton2 currently being the only platform holding this designation.

Instance Throughput Is Defined in vCPUs
Beyond the instance type, the most important metric defining an instance’s capabilities is its vCPU count. “Virtual CPUs” essentially means the logical CPU cores that are available to the virtual machine. Amazon offers instances ranging from 1 vCPU up to 128, with the most common sizes across the most popular platforms being 2, 4, 8, 16, 32, 48, 64, and 96.

The Graviton2 being a single-socket 64-core platform without SMT means that the maximum available vCPU instance size is 64.

However, what this also means is that we’re in a bit of an apples-and-oranges conundrum when comparing against platforms which do come with SMT. When talking about 64 vCPU instances (“16xlarge” in EC2 lingo), a Graviton2 instance gives us 64 physical cores, while an AMD or Intel system gives us only 32 physical cores with SMT. I’m sure there will be readers who consider such a comparison “unfair”, however it’s also the positioning that Amazon is out to make in terms of delivered throughput, and most importantly, the equivalent pricing between the different instance types.
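The vCPU accounting can be made concrete with a trivial sketch of the mapping just described:

```python
def physical_cores(vcpus, smt_ways):
    # vCPUs count logical threads; SMT platforms expose several per core
    return vcpus // smt_ways

graviton2_cores = physical_cores(64, 1)  # no SMT: 64 physical cores
x86_cores = physical_cores(64, 2)        # 2-way SMT: 32 physical cores
```

Same vCPU count, same price tier, but twice the physical cores on the Arm side.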

Today’s Competition
Today’s article will focus on two main competitors to the Graviton2: AMD EPYC 7571 (Zen 1) powered m5a instances, and Intel Xeon Platinum 8259CL (Cascade Lake) powered m5n instances. At the time of writing, these are the most powerful instances available from the two x86 incumbents, and should provide the most interesting comparison data.

It’s to be noted that we would have loved to be able to include AMD EPYC2 Rome based (c5a/c5ad) instances in this comparison; Amazon had announced they had been working on such deployments last November, but alas the company wasn’t willing to share with us preview access (One reason given was the Rome C-type instances weren’t a good comparison to the Graviton2’s M-type instance, although this really doesn’t make any technical sense). As these instances are getting closer to preview availability, we’ll be working on a separate article to add that important piece of the puzzle of the competitive landscape.

Tested 16xlarge EC2 Instances

                    m6g                   m5a                     m5n
CPU Platform        Graviton2             EPYC 7571               Xeon Platinum 8259CL
vCPUs               64                    64                      64
Cores per Socket    64                    32                      24 (16 instantiated)
SMT                 –                     2-way                   2-way
CPU Sockets         1                     1                       2
Frequencies         2.5GHz                2.5-2.9GHz              2.9-3.2GHz
Architecture        Arm v8.2              x86-64 + AVX2           x86-64 + AVX512
µarchitecture       Neoverse N1           Zen                     Cascade Lake
L1I Cache           64KB                  64KB                    32KB
L1D Cache           64KB                  32KB                    32KB
L2 Cache            1MB                   512KB                   1MB
L3 Cache            32MB shared           8MB shared              35.75MB shared
                                          per 4-core CCX          per socket
Memory Channels     8x DDR4-3200          8x DDR4-2666            6x DDR4-2933
                                          (2x per NUMA node)      per socket
NUMA Nodes          1                     4                       2
TDP                 80-110W (estimated)   180W                    210W per socket
Price               $2.464 / hour         $2.752 / hour           $3.808 / hour
Comparing the Graviton2 m6g instances against the AMD m5a and Intel m5n instances, we see a few differences in the hardware capabilities that power the VMs. Again, the most notable difference is the fact that the Graviton2 comes with physical core counts matching the deployed vCPU number, whilst the competition counts SMT logical cores as vCPUs as well.

Another aspect of higher-vCPU-count instances is that you can receive a VM that spans several sockets. AMD’s m5a.16xlarge here is still able to deploy the VM on a single socket thanks to the EPYC 7571’s 32 cores; however, Intel’s Xeon system employs two sockets, as there is currently no deployed Intel hardware in EC2 that can offer the required vCPU count in a single socket.

Both the EPYC 7571 and the Xeon Platinum 8259CL are parts which aren’t publicly available or even listed on either company’s SKU list, so these are custom parts for the likes of Amazon for datacentre deployments.

The AMD part is a 32-core Zen1 based single-socket solution (at least for the 16xlarge instances in our testing) clocking in at 2.5 GHz all-cores to up to 2.9GHz in lightly threaded scenarios. The peculiarity of this system is that it’s somewhat limited by AMD’s quad-chip MCM system which has four NUMA nodes (one per chip and 2-channel memory controller), a characteristic that’s been eliminated in the newer EPYC2 Zen2 based systems. We don’t have concrete confirmation on the data, but we suspect this is a 180W part based on the SKU number.

Intel’s Xeon Platinum 8259CL is based on the newer Cascade Lake generation CPU cores. This particular part is also specific to Amazon, and consists of 24 enabled cores per socket. To reach the 16xlarge 64 vCPU count, EC2 provides us a dual-socket system with 16 out of the 24 cores instantiated on each socket. Again, we have no confirmation on the matter, but these parts should be rated at 210W per socket, or 420W total. We do have to remind ourselves that we’re only ever using 66% of the system’s cores in our instance, although we do have access to the full memory bandwidth and caches of the system.

The cache configuration in particular is interesting here as things differ quite a bit between platforms. The private caches of the actual CPUs themselves are relatively self-explanatory, and the Graviton2 here does provide the highest capacity of cache out of the trio, but is otherwise equal to the Xeon platform. If we were to divide the available cache on a per-thread basis, the Graviton2 leads the set at 1.5MB, ahead of the EPYC’s 1.25MB and the Xeon’s 1.05MB. The Graviton2 and Xeon systems have the distinct advantage that their last level caches are shared across the whole socket, while AMD’s L3 is shared only amongst 4-core CCX modules.
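The per-thread figures above come from dividing total L2 plus L3 capacity by the instance's 64 vCPUs. For the two single-socket platforms this works out as follows (a sketch using the cache sizes from the spec table earlier in the article):

```python
def cache_per_thread_mb(threads, l2_mb_per_core, n_cores, l3_slices_mb):
    # Sum private L2 across all cores plus every last-level cache slice,
    # then divide by the instance's logical thread (vCPU) count
    total_mb = n_cores * l2_mb_per_core + sum(l3_slices_mb)
    return total_mb / threads

graviton2 = cache_per_thread_mb(64, 1.0, 64, [32.0])    # -> 1.5 MB/thread
epyc7571 = cache_per_thread_mb(64, 0.5, 32, [8.0] * 8)  # -> 1.25 MB/thread
```

The EPYC's 64 MB of L3 is nominally large, but being split into eight 8 MB CCX slices means any single thread only ever sees one slice.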

The NUMA discrepancies between the systems aren’t that important in parallel processing workloads with actual multiple processes, but it will have an impact on multi-threaded as well as single-threaded performance, and the Graviton2’s unified memory architecture will have an important advantage in a few scenarios.

Finally, there’s quite a difference in the pricing between the instances. At $2.46 per hour, the Graviton2 system edges out the AMD system in price, and is massively cheaper than the $3.80 per hour cost of the Xeon based instance. Although when talking about pricing, we do have to remember that the actual value delivered will also wildly depend on the performance and throughput of the systems, which we’ll be covering in more detail later in the article.
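Normalizing the hourly prices per vCPU makes the gap explicit (raw list price only; actual value depends on delivered performance, as the article notes):

```python
hourly_price = {"m6g": 2.464, "m5a": 2.752, "m5n": 3.808}  # $/hour, 16xlarge

# All three instances expose 64 vCPUs
per_vcpu_hour = {name: p / 64 for name, p in hourly_price.items()}
# m6g: $0.0385, m5a: $0.0430, m5n: $0.0595 per vCPU-hour

xeon_premium = hourly_price["m5n"] / hourly_price["m6g"]  # ~1.55x the m6g price
```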

We thank Amazon for providing us with preview access to the m6g Graviton2 instances. Aside from giving us access, neither Amazon nor any of the other mentioned companies had any influence on our testing methodology, and we paid for our EC2 instance testing time ourselves.


Dr. Susanne Vogl, Sociology


03.2018 – 01.2022
Post-doc Research Fellow
Institute of Education, University of Vienna

10.2014 – 12.2021

Post-doc University Assistant
Institute of Sociology, University of Vienna

since 2013

Post-doc Research Fellow
Institute of Sociology, Technical University Berlin, Germany

03.2014 – 10.2014

Post-doc Researcher
Ludwig Boltzmann Institute Health Promotion Research, Vienna
Research Stream „Health Promotion in Schools“

10.2006 – 03.2014

Wissenschaftliche Mitarbeiterin (equivalent to Lecturer)
Chair of Sociology and Empirical Social Science (Siegfried Lamnek)
Catholic University of Eichstaett-Ingolstadt, Germany

(on leave 04.2008 – 09.2009 & 04.2010 – 03.2014)

05.2019 University of Applied Sciences, Muttenz, Switzerland
09.2018 University of Applied Sciences, Krems, Austria
2017-2020 Department of Sociology, University of Vienna
02.2018 Medical University of Vienna, Austria

Visiting Lecturer at the German Youth Institute (DJI)


Visiting Lecturer at the University of Bern

10.2013 – 02.2014

Visiting Lecturer at the Technical University Berlin

2009 & 2011-12

Visiting Lecturer at the University of West of England, Bristol, U.K.


Visiting Lecturer at the Catholic University of Eichstaett-Ingolstadt

since 2007
Methodological consulting for students, researchers, and external
research projects, e.g. “child-friendly cities” (Kinderfreundliche Kommunen e.V.)


PhD in Sociology (summa cum laude)
Catholic University of Eichstaett-Ingolstadt, Germany

PhD thesis: “Alter & Methode: Ein Vergleich persönlicher und telefonischer Leitfadeninterviews mit Kindern” (Age & Method: A Comparison of semi-structured Telephone and Face-to-face Interviews with Children)

2000 – 2005

Diploma in Sociology (first class degree)
Catholic University of Eichstaett-Ingolstadt, Germany

2017 – 2018

Back-to-Research Grant
of the University of Vienna; Research Scholarship (20,000€)


Preis für herausragende Abschlussarbeit (award for excellent dissertation)
by Eichstätter Universitätsgesellschaft e.V.

2005 – 2011

PhD-Scholarship by Maximilian-Bickhoff-University Foundation

– – –


Lamnek, S.; Vogl, S.: Theorien abweichenden Verhaltens II. „Moderne“ Ansätze. [Theories of Deviant Behaviour II: Modern Approaches] 4th edition. Paderborn: Fink UTB. ISBN 978-3825217747


Vogl, S.: Interviews mit Kindern führen: Eine praxisorientierte Einführung. [Conducting Interviews with Children: A practical guide] In: Grundlagentexte Methoden. Weinheim: Beltz Juventa ISBN 978-3-7799-3304-5


Lamnek, S.; Luedtke, J.; Ottermann, R.; Vogl, S.: Tatort Familie: Häusliche Gewalt im gesellschaftlichen Kontext [Scene family: domestic violence in social context] Wiesbaden: VS Verlag ISBN 978-3-531-93127-2


Vogl, S.: Alter & Methode: Ein Vergleich telefonischer und persönlicher Leitfadeninterviews mit Kindern [Age & method: a comparison of qualitative telephone and face-to-face interviews with children] Wiesbaden: VS Verlag. ISBN 978-3-531-94308-4

Vogl, S.; Schmidt, E.-M.; Zartler, U.: Triangulating Perspectives: Ontology and Epistemology in Qualitative Multiple Perspective Research. International Journal of Social Research Methodology.
DOI: 10.1080/13645579.2019.1630901


Vogl, S.: Advance Letters in a Telephone Survey on Domestic Violence: Effect on Unit-Nonresponse and Reporting. In: International Journal of Public Opinion Research 31(2), 243-265.

Vogl, S. Integrating and Consolidating Data in Mixed Methods Data Analysis: Examples from Focus Group Data with Children. Journal of Mixed Methods Research (online first), 1–19.
DOI: 10.1177/1558689818796364

Vogl, S.; Zartler, U.; Schmidt, E.-M.; Rieder, I.:
Developing an analytical framework for multiple perspective, qualitative longitudinal interviews (MPQLI). In: International Journal of Social Research Methodology.21(2), pp 177-190.
DOI: 10.1080/13645579.2017.1345149


Vogl, S.: Quantifizierung. Datentransformation von qualitativen Daten in quantitative Daten in Mixed-Methods-Studien. In: Kölner Zeitschrift für Soziologie und Sozialpsychologie. 69(2), 287-312.
DOI: 10.1007/s11577-017-0461-2


Vogl, S.: Children’s Verbal, Interactive and Cognitive Skills and Implications for Interviews. In: Quality & Quantity. 49 (1), 319-338.
DOI: 10.1007/s11135-013-9988-0


Christopoulos, D.; Vogl, S.: The Motivation of Social Entrepreneurs: The roles, agendas and relations of altruistic economic actors. In: The Journal of Social Entrepreneurship 6(1), 1-30.
DOI: 10.1080/19420676.2014.954254


Vogl, S.: Telephone versus Face-to-Face Interviews: Mode Effect on Semi-Structured Interviews with Children. In: Sociological Methodology. 43 (1), pp. 138-182.
DOI: 10.1177/0081175012465967


Krell, C.; Vogl, S.: Parental leave, parenting benefits and their potential effect on father’s participation in Germany. In: International Journal of Sociology of the Family. 38 (1), pp. 19-38.


Vogl, S.: Children between the age of 5 and 11: What ‘don’t know’ answers tell us. In: Quality & Quantity. 46 (4), pp. 993-1011.
DOI: 10.1007/s11135-011-9438-9


Vogl, S.: Gruppendiskussionen mit Kindern – methodische und methodologische Besonderheiten (Focus groups with children – methodical and methodological peculiarities). In: ZA-Information. 57, pp. 28-60


Vogl, S., Parsons, J., Owens, L., Lavrakas, P. (forthcoming): Experiments on the Effects of Advance Letters in Surveys. In: Lavrakas, P. J., de Leeuw, E., Holbrook, A., Kennedy, C., Traugott, M. W., West, B.: Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment. New York: Wiley.

2019 Baur, N., Fülling, J., Hering, L., Vogl, S. (forthcoming): Die Verzahnung von Arbeit und Konsum (The Interrelation of Work and Consumption). In: Becke, G., Ernst, S. (Eds.) Transformation der Arbeitsgesellschaft. Wiesbaden: Springer.
2019 Vogl, S.: Empirische Forschung zu Alter und Altern: qualitative, quantitative und Mixed Methods Zugänge (Empirical Research on Age and Ageing: Qualitative, Quantitative and Mixed Methods Approaches). In: Kolland, F.; Gallistl, V.; Parisot, V. (eds.) Kulturgerontologie. Springer.
2019 Vogl, S.; Parzer, M.; Astleithner, F.; Mataloni, B.: Heterogeneity in Neue Mittelschule: Reproduction of Social Inequality. In: Flecker, J., Wöhrer, V., Rieder, I. (eds.) Pathways to the Future. Vienna University Press.
2019 Astleithner, F.; Vogl, S.; Mataloni, B.: Educational Aspirations of Young People. In: Flecker, J., Wöhrer, V., Rieder, I. (eds.) Pathways to the Future. Vienna University Press.
2019 Mataloni, B.; Astleithner, F.; Vogl, S.: (Critical) Events in the Life of Adolescents: Evidence from an Online-Survey. In: Flecker, J., Wöhrer, V., Rieder, I. (eds.) Pathways to the Future. Vienna University Press.
2019 Zartler, U.; Vogl, S.; Wöhrer, V.: Family as a Resource? The Role of the Family in Young People’s Transitions. In: Flecker, J., Wöhrer, V., Rieder, I. (eds.) Pathways to the Future. Vienna University Press.
2019 Vogl, S.; Jesser, A.; Wöhrer, V.: Methodological Design of the first wave of “Pathways to the Future”. In: Flecker, J.; Wöhrer, V.; Rieder, I. (eds.) Pathways to the Future. Vienna University Press.
2019 Vogl, S.: Mit Kindern Interviews führen: Ein praxisorientierter Überblick. (Interviewing Children). In Hedderich, I.; Butschi, C.; Reppin, J. (eds): Perspektiven auf Vielfalt in der frühen Kindheit – Mit Kindern Diversität erforschen.

Schmidt, E.-M., Zartler, U., Vogl, S.: Swimming against the tide? Austrian couples’ non-traditional work-care arrangements in a traditional environment. In: Grunow, D.; Evertsson, M. (Eds): New Parents in Europe: Work-Care Practices, Gender Norms and Family Policies. Edward Elgar.

2019 Vogl, S.: Gruppendiskussionen (Focus Groups) In: Baur, Nina; Blasius, Jörg (Eds): Handbuch der empirischen Sozialforschung (2nd edition). Wiesbaden: VS Verlag.
2018 Vogl, S.: Interviewing Children through Telephones. In: Child Barometer Finland.

Vogl, S.: Kinder befragen: Erfahrungen und Reflexionen aus der sozialwissenschaftlichen Praxis (Interviewing Children: Experiences and Reflections from Social Science Practice). In: TPS Spektrum, 8, pp. 48-51.


Vogl, S.: Gruppendiskussionen (Focus Groups) In: Baur, N.; Blasius, J. (Eds): Handbuch der empirischen Sozialforschung. Wiesbaden: VS Verlag.


Krell, C.; Vogl, S.: Gewalt in der Familie (Domestic Violence). In: Horn, Klaus-Peter; Kemnitz, Heidemarie; Marotzki, Winfried; Sandfuchs, Uwe (Eds): Lexikon der Erziehungswissenschaft. Bad Heilbrunn: Julius Klinkhardt.


Vogl, S.: Focus Groups With Children. In: Fiedler, Julia; Posch, Christian (Eds): Yes, they can! Children Researching Their Lives. Hohengehren: Schneider. pp. 86-98.


Vogl, S., Baur, N.: The Social Construction of Gender and Lifestyles: Theoretical Concept for Gender and Family Research. In: Working Paper Series. Institut für Soziologie, Wien.
DOI: 10.25365/phaidra.46

2017 Flecker, J., Jesser, A., Mataloni, B., Pohn-Lauggas, M., Reinprecht, C., Schlechter, M., Schmidt, A., Vogl, S., Wöhrer, V., Zartler, U.: Die Vergesellschaftung Jugendlicher im Längsschnitt. Teil 2: Forschungsdesign und methodische Überlegungen einer Untersuchung in Wien (The Socialisation of Young People in Longitudinal Perspective. Part 2: Research Design and Methodological Considerations of a Study in Vienna). Working Paper 05/2017, Institut für Soziologie, Wien.

Flecker, J., Jesser, A., Mataloni, B., Pohn-Lauggas, M., Reinprecht, C., Schlechter, M., Vogl, S., Wöhrer, V., Zartler, U.: Die Vergesellschaftung Jugendlicher im Längsschnitt. Teil 1: Theoretische Ausgangspunkte für eine Untersuchung in Wien (The Socialisation of Young People in Longitudinal Perspective. Part 1: Theoretical Starting Points for a Study in Vienna). Working Paper Series. Institut für Soziologie, Wien.


Flaschberger, E., Vogl, S.: Auswertung der Feedbackbögen zum 2. WieNGS-Jour Fixe der Stufe 4 und zum 2. gemischten WieNGS-Jour Fixe des Schuljahres 2013/2014. LBIHPR Forschungsbericht. Wien: LBIHPR.

Vogl, S.; Flaschberger, E.: Reflexionsgruppen mit den GesundheitskoordinatorInnen der Wiener Netzwerk Gesundheitsfördernde Schulen (WieNGS) 2014. LBIHPR Forschungsbericht. Wien: LBIHPR.


Felder-Puig, R.; Vogl, S.; Gugglberger, L.: Projekt ‘Gesunde BMHS’: Erhebungsinstrumente für die Online-Befragung. LBIHPR Forschungsbericht. Wien: LBIHPR.


Gugglberger, L.; Vogl, S.: Jahresgespräch mit der Steuergruppe des WieNGS 2014. Protokoll. Wien: LBIHPR Forschungsbericht.


Vogl, S.: Kinderwunsch, Elternschaft, Elternzeit und Elterngeld (Desire for Children, Parenthood, Parental Leave and Parenting Benefits). In: Zentralinstitut für Ehe und Familie in der Gesellschaft (ZFG) an der Katholischen Universität Eichstätt-Ingolstadt (Hrsg.): Familien-Prisma. Autumn 2010. pp. 51-56.


Schmalz, S.; Vogl, S.: Gewalt an Schulen in Bayern (Violence at Schools in Bavaria). In: Agora 2/2010. Eichstaett.


Vogl, S.: Familie: Wunsch und Wirklichkeit (Family: Ideal and Reality). In: Agora. 2/2009. Eichstaett.


Vogl, S.: Methodological issues of applying focus groups with children. ESA 9th Conference. Conference CD, Full Paper. Lisbon.


Vogl, S.: Gewalt von Frauen gegen Männer (Women’s Violence against Men). In: Agora. 1/2009. Eichstaett.


Vogl, S.: Alter & Methode – Ein Vergleich von telefonischen und persönlichen Leitfadeninterviews (Age & Method: A Comparison of Telephone and Face-to-Face Semi-Structured Interviews). In: Kuckartz, Udo (Ed.) CAQD 2007. Conference Volume. Marburg. pp. 44-55.

Coronavirus: boom on the web for the image of the doctor cradling Italy

“It is a very beautiful and moving image that its creator, Franco Rivolli, sent us, and we thought we would share it given the moment our entire country is going through with the coronavirus emergency.” Luca Sanzo, president of the Chiaravalle Centrale section of the National Carabinieri Association in the province of Catanzaro, seems not at all surprised by the attention gathered by that image of a masked doctor at work, lovingly cradling Italy.

“I immediately thought,” Sanzo adds, “of sharing on Facebook that image, which in many ways captures our country’s current situation. A touching image created by a friend, Franco Rivolli, that has made the rounds of social networks and beyond, and in which, at a time like this, we can all truly recognize ourselves a little.” The image has already drawn tens of thousands of shares on the social network since it was first posted.