zdnet.com: Build your own monster AMD Ryzen Threadripper 3990X system for under $7,000

By Adrian Kingsley-Hughes for Hardware 2.0 | February 17, 2020 — 13:40 GMT (13:40 GMT) | Topic: Hardware

Got your eye on AMD’s new silicon behemoth, the 64-core, $3,990 Ryzen Threadripper 3990X CPU? Here’s what you need to build a complete system.

Got your eye on the new monster 64-core AMD Ryzen Threadripper 3990X CPU? With the CPU itself costing almost the same as four iPhone 11 Pro Max handsets, a system like this is never going to be classed as “budget” or “affordable,” but you will get an absolute beast of a system that will tackle pretty much anything you can throw at it.

So, here’s what you need:

Disclosure: ZDNet may earn an affiliate commission from some of the products featured on this page. ZDNet and the author were not compensated for this independent review.
AMD Ryzen Threadripper 3990X
Let’s start with the CPU

Here’s the core of the build — the AMD Ryzen Threadripper 3990X. The core count is a world first for a HEDT (High-End Desktop) processor, and it is aimed squarely at high-end 3D rendering, visual effects, and video professionals.

CPU Socket Type: Socket sTRX4
# of Cores: 64-Core
# of Threads: 128
Operating Frequency: 2.9 GHz
Max Turbo Frequency: Up to 4.3 GHz
L1 Cache: 4MB
L2 Cache: 32MB
L3 Cache: 256MB
Architecture: 7nm
RAM Types: DDR4 3200
Memory Channel: 4
PCI Express Revision: 4.0
Thermal Design Power: 280W

$3,990 at Newegg

GIGABYTE TRX40 AORUS MASTER

The Motherboard

There are over a dozen Socket TRX40 motherboards available, and one of them is the GIGABYTE TRX40 AORUS MASTER (the processor socket is different from the one used by older Threadripper CPUs).

Apart from having an updated socket, this board is designed from the ground up to cope with the thermal demands of the Threadripper 3990X. It also features everything you expect on a high-end board: three M.2 ports, six SATA ports, support for 256GB of RAM, and loads of USB ports.
$500 at Newegg
G.SKILL Trident Z Neo Series 32GB (4 x 8GB) RAM
The RAM

The motherboard supports 256GB of RAM, but let’s curb our enthusiasm by only — only! — installing 32GB.

Capacity: 32GB (4 x 8GB)
Type: 288-Pin DDR4 SDRAM
Speed: DDR4 3800 (PC4 30400)
CAS Latency: 14
Timing: 14-16-16-36
Voltage: 1.5V
Heat Spreader: Yes
Features: Designed and tested for AMD Ryzen 3000 series CPUs, optimized compatibility with AMD X570 chipset, sleek dual-tone aluminum heatspreader design, and fully customizable RGB lighting support

$480 at Newegg
Samsung 970 EVO M.2 2280 2TB SSD
The Storage

Since the motherboard has support for three M.2 drives, and this is a high-performance system, we might as well make use of them by installing a 2TB Samsung 970 EVO M.2 2280 SSD (if you feel you need more space, add another, or some spinning storage to suit).

Form Factor: M.2 2280
Capacity: 2TB
Memory Components: 64L V-NAND MLC
Interface: PCIe Gen 3.0 x4, NVMe 1.3
Controller: Samsung Phoenix Controller
Cache: 2GB LPDDR4 DRAM

$700 at Newegg
XFX Radeon VII RX-VEGMA3FD6 16GB 4096-Bit HBM2
The Graphics Card

A powerful CPU needs a powerful GPU, and they don’t come much more powerful than this XFX Radeon VII. If one isn’t enough, you can throw a few more into the system!

Core Clock: 1400 MHz
Boost Clock: 1750 MHz
Stream Processors: 3840
Effective Memory Clock: 1 GHz (2.0 Gbps)
Memory Size: 16GB
Memory Interface: 4096-Bit
Memory Type: HBM2
Max Resolution: 4096 x 2160
Cooler: Triple Fans
Thermal Design Power: 295W

$550 at Newegg
Cooler Master Wraith Ripper
The Cooler

No, a CPU that costs almost $4,000 doesn’t come with a cooler!

This seven-heatpipe, dual-heatsink cooler is more than up to the job of cooling the Threadripper 3990X!
$222 at Newegg
Seasonic PRIME TX-1000 Ultra-Titanium 1000W PSU
The Power Supply Unit

A fully modular PSU (which helps reduce clutter and improve airflow) that features a Fanless Mode for when quiet is needed. This unit comes with everything you need to power a monster PC.
$280 at Newegg
Windows 10 Pro
The Operating System

There’s some debate about whether the 64-core/128-thread Threadripper 3990X will work optimally with Windows 10 Pro, or whether it needs Windows 10 Pro for Workstations. Early testing suggested Windows 10 Pro created bottlenecks, but a statement from AMD attributed this to a testing error on the part of the tester.
$200 at Newegg

The starting price for this system is $6,922, but if you start adding multiple M.2 SSDs or more GPUs, or you want to take the RAM up closer to the 256GB limit, then this will quickly spiral upwards!
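For the record, that total breaks down as: $3,990 (CPU) + $500 (motherboard) + $480 (RAM) + $700 (SSD) + $550 (GPU) + $222 (cooler) + $280 (PSU) + $200 (Windows 10 Pro) = $6,922.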

zdnet.com: Look what’s inside Linus Torvalds’ latest Linux development PC

The Linux creator recently announced he’d upgraded his main PC to a speedy AMD Threadripper 3970x processor. But a computer is more than a CPU. In an exclusive conversation, Torvalds talked about what’s what with his latest system.

By Steven J. Vaughan-Nichols for Linux and Open Source | May 27, 2020 — 13:59 GMT (14:59 BST) | Topic: PCs

In a Linux Kernel Mailing List (LKML) post, Linus Torvalds, Linux’s top developer, talked about the latest progress in the next version of Linux: Linux 5.7-rc7. Along the way, he mentioned, “for the first time in about 15 years, my desktop isn’t Intel-based.” In his newest development box, he’s “rocking an AMD Threadripper 3970x.” But a computer is more than just a processor no matter how speedy it is, so I talked with Torvalds to see exactly what’s in his new box.

First, he’s already impressed by its performance:

“My ‘allmodconfig’ test builds are now three times faster than they used to be, which doesn’t matter so much right now during the calming down period, but I will most definitely notice the upgrade during the next merge window,” said Torvalds.

The AMD Threadripper 3970x comes with 32 cores. It’s built using AMD’s 7-nanometer “Zen 2” core architecture, and it boasts 88 PCIe 4.0 lanes. AMD claims it’s up to 90% faster than its competition. Phoronix’s independent tests found that the “Threadripper 3970X absolutely dominates in performance” and “outpaces the Core i9 10980XE.”

Torvalds is a build-your-own box type of guy.

“I typically build all my own machines. Usually they are frankenboxes — I’ll re-use the case or the SSD from the previous machine or something like that. This time it was an all-new build,” he said.

Why do it yourself?

“I don’t like having others build my machine, partly because I have my own specs I care most about, but partly I get self-conscious about getting donations that I no longer really need,” Torvalds said.

Before this latest build, his box was an i9-9900k. Normally, Torvalds would just pop into the local Fry’s to pick up some of the more basic stuff directly, but with the virus, this time it was all from Amazon. The pieces came in over a few weeks (no more two-day shipping of computer parts these days); the last two pieces came last Friday.

So, here’s Torvalds’ annotated hardware list:
Disclosure: ZDNet may earn an affiliate commission from products featured on this page.

CPU

See it now: Ryzen Threadripper 3970X

“Initially, my plan was actually to do an AM4 board and just a Ryzen 3950X, which is the more mainstream upgrade,” Torvalds said. The “Ryzen 3950X would have been an upgrade over the Intel i9-9900K, but it would only have been an incremental one.”

“Normally, I go for the regular consumer CPU’s, since they tend to be the best bang for the buck, and in the case of Intel CPU’s I actually like that they just have integrated graphics. I don’t care about the GPU very much, so an integrated one is fine, and it avoids the worry about picking the right GPU and possible noise-levels from one with bad fans.”

Torvalds went “back-and-forth about that for a while,” because, as he said:

“The Threadripper power use made me worry about noise. But I decided to do a big upgrade because unlike the traditional Intel Xeon high-core-count platform, AMD’s Threadripper series still falls in the ‘good bang for the buck.’ So I bit the bullet, and am so far quite pleased.”

Motherboard

See it now: Gigabyte Aorus TRX40 Master

Here, Torvalds’ main concern was:

“A board that had what looked like good power delivery and fan control. In the builds I do, I really want the basics to be solid, and there’s little more basic than power delivery. Long long ago I had a couple of systems that were unreliable due to power brown-out, and I’ve gotten religious about this now. So I look for something that is good for overclocking, and then I do _not_ overclock things.”

In short, he wants a PC that can handle a heavy load, but he’s not going to push the machine to its limits. That said, Torvalds absolutely detests:

“The default fan settings of this motherboard (very whiny small high-RPM fan for VRM [Voltage Regulator Module] cooling), but you can tweak the BIOS settings to something much better. Also note to anybody else: This is an E-ATX board, so it can be inconvenient in the wrong case.”

Fan

See it now: Noctua NF-A14 PWM, Premium Quiet Fan

As you can tell, noise is a big issue for Torvalds. He cares deeply about it:

“So I want good fans and coolers, and I’ve had good experiences with Noctua before,” Torvalds said. “The extra fan is because I like that push-pull setup, and with a big 140mm Noctua fan running at low speed, I’m not worried about noise levels. Even when it ramps up under load, I don’t find the noise of those fans annoying. It’s more of a soothing ‘whoosh’ white noise sound, none of the annoying whining or rattling that you get with bad fans.”

CPU Cooler

See it now: Noctua NH-U14S and Noctua NF-A15

Torvalds uses two CPU cooler fans. The NH-U14S is the main one, while the extra NF-A15 fan is for the push-pull configuration of that cooler.

With all his concern about noise, why not water-cooling, you ask?

“I’m not a fan of water-cooling. Reliability worries me, and I’m not convinced the AIO systems are any better than a good air cooling system. And the custom ones are way too much effort, and I worry about the pump and gurgling noises,” Torvalds said.

Case

See it now: Be Quiet Dark Base 700

For the case, it’s once again all about noise reduction.

“I like Noctua fans better than the Be Quiet ones for some reason. But Be Quiet would have been my second choice, and Noctua doesn’t make cases,” he said.

Extra fan

See it now: Silent Wings 3

Why an extra fan? Torvalds explained:

“The extra fan (the case comes with two already) is because I initially ordered the case, and then when looking at it I decided that it looks like the front intake looks more restricted than the back output (because of the front panel), and since I was waiting for other parts to be delivered anyway, I decided that an extra intake fan would be better for airflow, and hopefully cause positive case pressure and less worry about dust.”

In the end, all the effort to make a quiet powerful PC was worth it.

“With the right fan control setup in the BIOS (and assuming you picked the right fan headers: The motherboard paper manual had horrible pictures and I got the CPU and system fan header the wrong way round for the first build), you have a machine that is basically silent when idling, and without any annoying whine (but not silent) under full load.”

Power Supply Unit

See it now: Seasonic Focus GX-850

The GX-850 wasn’t Torvalds’ first choice, but availability during the time of the coronavirus made it what it is. Still, “it should be solid,” Torvalds said. He cares deeply about power delivery basics:

“I basically go ‘what’s the top power use of the machine?,’ and then pick a power supply with a rating 2x that, and then look for reviews and reputable brands.”
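As a rough worked example (my numbers, not Torvalds’): the 3970x is rated at 280W TDP and an RX 580-class card at roughly 185W, so with drives, fans, and RAM the machine’s peak draw is somewhere near 500W. Doubling that points at a supply in the 1000W class, and the 850W GX-850 was the closest unit he could actually get.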

Storage

See it now: 1TB Samsung EVO 970

When it comes to storage, Torvalds is solid-state drives (SSD) all the way:

“I’ve refused to touch spinning media for over a decade by now, and for the last several generations I’ve tried to avoid the hassle with cabling, etc., by just going with an m.2 form factor. I’ve had several of the Samsung SSD’s, they’ve been fine. A few generations ago there were lots of bad SSD’s, these days it’s much less of an issue, but I’ve stuck with what works for me.”

Memory

See it now: 4x16GB DDR4-2666

RAM proved to be a sore spot for Torvalds:

“This is actually the least favorite part of the build for me — it’s fine memory, but I really wanted ECC [Error-correcting code] memory. I had a hard time finding anything [priced sanely] on Amazon, so this I feel is a temporary ‘good enough for now’ thing that works fine in practice.”

Besides, he continued:

“I don’t actually even need 64GB of RAM, since the stuff I do doesn’t tend to be all that memory-intensive, but I wanted to populate all four memory channels, and RAM is cheap.”

While game developers and artificial intelligence and machine learning developers care deeply about graphics, video and image processing don’t interest Torvalds much. He used:

“Some random Sapphire RX580 graphics card. It’s overkill for what I do (desktop use, no games).”

Linux

See it now: Fedora 32

That’s it.

“Slap it all together, make sure you get all the fan settings right, and (in my case) install Fedora 32 on it, and you’ve got a fairly pleasant workstation,” Torvalds said.

While for his main workstation, Torvalds builds his own, he also has cutting edge OEM PCs for “access to new technology that I might not otherwise have bought myself.”

For his laptop, he uses a Dell XPS 13.

“Normally,” Torvalds said, “I wouldn’t name names, but I’m making an exception for the XPS 13 just because I liked it so much that I also ended up buying one for my daughter when she went off to college.”

Torvalds concluded:

“If the above makes you go ‘Linus has too much hardware,’ you’d be right. Usually I have one main box, and usually it’s something I built myself.”

Related Stories:

Build your own monster AMD Ryzen Threadripper 3990X system for under $7,000
AMD unveils world’s most powerful desktop CPUs
Linus Torvalds: ‘I’m not a programmer anymore’

schedulix: The Open Source Enterprise Job Scheduling System

The schedulix Feature Set
A solution for every task
schedulix Features
schedulix offers a huge feature set that enables you to meet all of your IT process automation requirements in an efficient and elegant way.
User-defined exit state model

Complex workflows with branches and loops can be realised using batch hierarchies, dependencies and triggers, by means of freely definable Exit State objects and rules for how they are interpreted.
Job and batch dependencies

You can make sure that individual steps of a workflow are performed correctly by defining Exit State dependencies. Dependencies can be specified more precisely in addition to the required exit state by defining a condition.
Branches

Branches can be implemented in alternative sub-workflows using dependencies that have the exit state as a condition.
Hierarchical workflow modelling

Among other benefits, hierarchical definitions for work processes facilitate the modelling of dependencies, allow sub-processes to be reused and make monitoring and operations more transparent. The additional Milestone object type makes it easier to model complex workflows.
Job and batch parameters

Both static and dynamic parameters can be set for Submit batches and jobs.
Job result variable

Jobs can set any result variables via the API which can then be easily visualised in the Monitoring module.
Dynamic submits

(Sub-)workflows can be dynamically submitted or parallelised by jobs using the Dynamic Submit function.
Pipelining

Local dependencies between parts of the submitted batch instances are correctly assigned when parallelising batches using the Dynamic Submit function. This hugely simplifies the processing of pipelines.
Job and batch triggers

Dynamic submits for batches and jobs can be automated using exit state-dependent triggers.
This allows notifications and other automated reactions to workflow events to be easily implemented. In addition to the exit state and trigger type, events can also be specified more precisely by defining a condition. Asynchronous triggers enable events to be triggered during runtime. This also allows for reactions to runtime timeouts.
Loops

Automatic reruns of sub-workflows can be implemented by using triggers.
External jobs

So-called ‘pending’ jobs can be defined to swap out sub-workflows to external systems without overloading the system with placeholder processes.
Folders

Job, Batch and Milestone workflow objects can be orderly organised in a folder structure.
Folder parameters

All jobs below a folder can be centrally configured by defining parameters at folder level.
Folder environments

Requirements for static resources can be configured to be inherited by all jobs below a folder by defining folder environments.
This allows jobs to be assigned to different runtime environments (development, test, production, etc.) dependent upon a higher-level folder.
Folder resources

Resources can also be globally instanced at folder level as well as in the workflow environment, making them available to all the jobs below this folder.
Job and batch resources

Instancing resources at batch or job level allows a workflow load generated by hierarchically subordinate jobs to be locally controlled.
Static resources

Static resources can be used to define where a job is to be run. If the requested resources are available in multiple environments, the jobs are automatically distributed by the schedulix Scheduling System.
Load control

A quantity of available units of a resource can be defined for runtime environments using system resources. A quantity can be stated in the resource requirement for a job to ensure that the load on a resource is restricted.
Job priority

The job priority can be used to define which jobs are to take priority over other jobs when there is a lack of resources. Jobs can be prevented from ‘starving’ by individually configured ‘priority aging’, which automatically raises their priority over time.
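To make the aging idea concrete, here is a minimal illustrative sketch in Python. It is a generic model of priority aging, not schedulix code or its API, and all names in it are invented:

# Generic illustration of priority aging; not schedulix code.
# Convention here: lower number = higher effective priority.
def effective_priority(base_priority, minutes_waiting, aging_step=10, boost_per_step=5):
    # For every aging_step minutes a job waits, its effective priority
    # improves by boost_per_step, so long-waiting jobs cannot starve.
    return base_priority - (minutes_waiting // aging_step) * boost_per_step

print(effective_priority(50, 0))    # fresh medium-priority job -> 50
print(effective_priority(80, 120))  # long-waiting low-priority job -> 20, now ranked first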
Load balancing

The interplay of static and system resources allows jobs to be automatically distributed over different runtime environments dependent upon which resources are currently available.
Synchronising resources

Synchronising resources can be requested with different lock modes (no lock, shared, exclusive, etc.) and can be used to synchronise independently started workflows.
Sticky allocations

Synchronising resources can be bound to a workflow across multiple jobs with sticky allocations to protect critical areas between two or more separately started workflows.
Resource states

A state model can be assigned to synchronising resources and the resource requirement can be defined dependent upon the state.
Automatic state changes can be defined dependent upon a job’s exit state.
Resource expirations

Resource requirements can define a minimum or maximum time interval in which the resource was assigned a new state. This allows actuality and queue conditions to be easily implemented.
Resource triggers

A reaction to the changing states of synchronising resources can be triggered with an automatic submit of a batch or job. After the state transition, the activation of the trigger can be more precisely specified with an extra condition.
Resource parameters

Resource parameters allow jobs to be configured dependent upon the allocated resource. Resource parameters of exclusively allocated resources can be written via the API. This allows resources to be used to store meta data.
Access controlling

Authentication routines for job servers, users and jobs using IDs and passwords are effective methods of controlling access to the system.
Time scheduling

The schedulix Time Scheduling module allows workflows to be automatically run at defined times based on complex time conditions. This usually obviates the need for handwritten calendars, although they can be used whenever required.
Web interface

The schedulix web front end allows standard browsers to be used for modelling, monitoring and operating in intranets and on the internet.
This obviates the need to run client software on the workstations.
API

The full API of the schedulix Scheduling System allows the system to be completely controlled from the command line or from programs (Java, Python, Perl, etc.).
Repository

The schedulix Scheduling System stores all the information about modelled workflows and the runtime data in an RDBMS repository.
All the information in the system can be accessed via the SCI (Standard Catalog Interface) whenever required using SQL.
SSL/TLS

The secure network communication of the schedulix components via SSL/TLS also fulfils more stringent security standards.

cnn.com: How Vietnam managed to keep its coronavirus death toll at zero
By Nectar Gan, CNN
Updated 3:16 AM EDT, Sat May 30, 2020

(CNN)When the world looked to Asia for successful examples in handling the novel coronavirus outbreak, much attention and plaudits were paid to South Korea, Taiwan and Hong Kong.

But there’s one overlooked success story — Vietnam. The country of 97 million people has not reported a single coronavirus-related death and on Saturday had just 328 confirmed cases, despite its long border with China and the millions of Chinese visitors it receives each year.

This is all the more remarkable considering Vietnam is a low-middle income country with a much less-advanced healthcare system than others in the region. It only has 8 doctors for every 10,000 people, a third of the ratio in South Korea, according to the World Bank.

After a three-week nationwide lockdown, Vietnam lifted social distancing rules in late April. It hasn’t reported any local infections for more than 40 days. Businesses and schools have reopened, and life is gradually returning to normal.

Motorbike riders with face masks are stuck in traffic during the morning peak hour on May 19 in Hanoi.
To skeptics, Vietnam’s official numbers may seem too good to be true. But Guy Thwaites, an infectious disease doctor who works in one of the main hospitals designated by the Vietnamese government to treat Covid-19 patients, said the numbers matched the reality on the ground.

“I go to the wards every day, I know the cases, I know there has been no death,” said Thwaites, who also heads the Oxford University Clinical Research Unit in Ho Chi Minh City.

“If you had unreported or uncontrolled community transmission, then we’ll be seeing cases in our hospital, people coming in with chest infections perhaps not diagnosed — that has never happened,” he said.

So how has Vietnam seemingly bucked the global trend and largely escaped the scourge of the coronavirus? The answer, according to public health experts, lies in a combination of factors, from the government’s swift, early response to prevent its spread, to rigorous contact-tracing and quarantining and effective public communication.

Acting early
Vietnam started preparing for a coronavirus outbreak weeks before its first case was detected.

At the time, the Chinese authorities and the World Health Organization had both maintained that there was no “clear evidence” for human-to-human transmission. But Vietnam was not taking any chances.

“We were not only waiting for guidelines from WHO. We used the data we gathered from outside and inside (the country to) decide to take action early,” said Pham Quang Thai, deputy head of the Infection Control Department at the National Institute of Hygiene and Epidemiology in Hanoi.

A woman practises social distancing while shopping for groceries from behind a line at a wet market in Hanoi.
By early January, temperature screening was already in place for passengers arriving from Wuhan at Hanoi’s international airport. Travelers found with a fever were isolated and closely monitored, the country’s national broadcaster reported at the time.

By mid-January, Deputy Prime Minister Vu Duc Dam was ordering government agencies to take “drastic measures” to prevent the disease from spreading into Vietnam, strengthening medical quarantine at border gates, airports and seaports.

On January 23, Vietnam confirmed its first two coronavirus cases — a Chinese national living in Vietnam and his father, who had traveled from Wuhan to visit his son. The next day, Vietnam’s aviation authorities canceled all flights to and from Wuhan.

As the country celebrated the Lunar New Year holiday, its Prime Minister Nguyen Xuan Phuc declared war on the coronavirus. “Fighting this epidemic is like fighting the enemy,” he said at an urgent Communist Party meeting on January 27. Three days later, he set up a national steering committee on controlling the outbreak — the same day the WHO declared the coronavirus a public health emergency of international concern.

On February 1, Vietnam declared a national epidemic — with just six confirmed cases recorded across the country. All flights between Vietnam and China were halted, followed by the suspension of visas to Chinese citizens the next day.

Over the course of the month, the travel restrictions, arrival quarantines and visa suspensions expanded in scope as the coronavirus spread beyond China to countries like South Korea, Iran and Italy. Vietnam eventually suspended entry to all foreigners in late March.

A Vietnamese People’s Army officer stands next to a sign warning about the lockdown on the Son Loi commune in Vinh Phuc province on February 20.
Vietnam was also quick to take proactive lockdown measures. On February 12, it locked down an entire rural community of 10,000 people north of Hanoi for 20 days over seven coronavirus cases — the first large-scale lockdown known outside China. Schools and universities, which had been scheduled to reopen in February after the Lunar New Year holiday, were ordered to remain closed, and only reopened in May.

Thwaites, the infectious disease expert in Ho Chi Minh City, said the speed of Vietnam’s response was the main reason behind its success.

“Their actions in late January and early February were very much in advance of many other countries. And that was enormously helpful … for them to be able to retain control,” he said.

Meticulous contact-tracing
The decisive early actions effectively curbed community transmission and kept Vietnam’s confirmed cases at just 16 by February 13. For three weeks, there were no new infections — until the second wave hit in March, brought by Vietnamese returning from abroad.

Authorities rigorously traced down the contacts of confirmed coronavirus patients and placed them in a mandatory two-week quarantine.

“We have a very strong system: 63 provincial CDCs (centers for disease control), more than 700 district-level CDCs, and more than 11,000 commune health centers. All of them contribute to contact tracing,” said doctor Pham with the National Institute of Hygiene and Epidemiology.

A confirmed coronavirus patient has to give health authorities an exhaustive list of all the people he or she has met in the past 14 days. Announcements are placed in newspapers and aired on television to inform the public of where and when a coronavirus patient has been, calling on people to go to health authorities for testing if they have also been there at the same time, Pham said.

A woman stands in a queue to provide a sample at a makeshift testing centre near the Bach Mai hospital in Hanoi on March 31.
When the Bach Mai hospital in Hanoi, one of the biggest hospitals in Vietnam, became a coronavirus hotspot with dozens of cases in March, authorities imposed a lockdown on the facility and tracked down nearly 100,000 people related to the hospital, including medics, patients, visitors and their close contacts, according to Pham.

“Using contact-tracing, we located almost everyone, and asked them to stay home and self quarantine, (and that) if they have any symptoms, they can visit the health centers for free testing,” he said.

Authorities also tested more than 15,000 people linked to the hospitals, including 1,000 health care workers.

Vietnam’s contact-tracing effort was so meticulous that it went after not only the direct contacts of an infected person, but also indirect contacts. “That’s one of the unique parts of their response. I don’t think any country has done quarantine to that level,” Thwaites said.

All direct contacts were placed in government quarantine in health centers, hotels or military camps. Some indirect contacts were ordered to self isolate at home, according to a study of Vietnam’s Covid-19 control measures by about 20 public health experts in the country.

A roadside barber donning a face mask gives a haircut to a customer in Hanoi.
As of May 1, about 70,000 people had been quarantined in Vietnam’s government facilities, while about 140,000 had undergone isolation at home or in hotels, the study said.

The study also found that of the country’s first 270 Covid-19 patients, 43 percent were asymptomatic cases — which it said highlighted the value of strict contact-tracing and quarantine. If authorities had not proactively sought out people with infection risks, the virus could have quietly spread in communities days before being detected.

Public communication and propaganda
From the start, the Vietnamese government has communicated clearly with the public about the outbreak.

Dedicated websites, telephone hotlines and phone apps were set up to update the public on the latest situations of the outbreak and medical advisories. The ministry of health also regularly sent out reminders to citizens via SMS messages.

Pham said on a busy day, the national hotlines alone could receive 20,000 calls, not counting the hundreds of provincial and district-level hotlines.

A propaganda poster on preventing the spread of the coronavirus is seen on a wall as a man smokes a cigarette along a street in Hanoi.
The country’s massive propaganda apparatus was also mobilized, raising awareness of the outbreak through loudspeakers, street posters, the press and social media. In late February, the health ministry released a catchy music video based on a Vietnamese pop hit to teach people how to properly wash their hands and follow other hygiene measures during the outbreak. Known as the “hand-washing song,” it immediately went viral, so far attracting more than 48 million views on YouTube.

Thwaites said Vietnam’s rich experience in dealing with infectious disease outbreaks, such as the SARS epidemic from 2002 to 2003 and the avian influenza outbreaks that followed, had helped the government and the public to better prepare for the Covid-19 pandemic.

“The population is much more respectful of infectious diseases than many perhaps more affluent countries or countries that don’t see as much infectious disease — Europe, the UK and the US for example,” he said.

“The country understands that these things need to be taken seriously and complies with guidance from the government on how to prevent the infection from spreading.”


Dr. Ingo Sauer, Applied Econometrics and International Economic Policy (Goethe University Frankfurt)

Dr. Ingo Sauer
Phone: +49 (69) 798-34781
E-Mail: isauer[at]wiwi.uni-frankfurt[dot]de
Address:

Goethe University Frankfurt
RuW, Postbox 47
Theodor-W.-Adorno-Platz 4
60629 Frankfurt am Main (Germany)
Room: RuW 4.218

Teaching

YouTube channel: WISSEN HAT KEINEN EIGENTÜMER (“Knowledge has no owner”)

Tutorial for Introduction to Economics (Übung zur Einführung in die Volkswirtschaftslehre), Goethe University Frankfurt (Course Evaluation_1; Course Evaluation_2)

German and European Central Banking – International Summer University, Frankfurt University of Applied Sciences and Goethe University (Course Evaluation)

Money and Currency (Geld und Währung), DHBW Mannheim

Economics, Finance and Accounting, Graduate School Rhein Neckar (Course Evaluation)

News

Nominated by the Faculty of Economics and Business for the 1822 University Prize for Excellent Teaching (07/2019)

WISAG Prize for the best dissertation in the social sciences or humanities at Goethe University (06/2019)

Publications

Sauer, I. (2015), Ownership Economics: On the Foundation of Interest, Money, Markets, Business Cycles and Economic Development. Edited by Frank Decker. Routledge Frontiers of Political Economy. Routledge, Oxford. Economica, 82: 581-582. doi:10.1111/ecca.12112

Sauer, I. (2012), The Dissolving Asset Backing of the Euro, CESifo Forum 13, Special Issue: The European Balance of Payments Crisis, edited by Hans-Werner Sinn, 63-72. Available online at: http://www.cesifo-group.de/DocDL/Forum-Sonderheft-Jan-2012.pdf

Sauer, I. (2011), Die sich auflösende Eigentumsbesicherung des Euro, ifo Schnelldienst 64(16), 31 August 2011, 58-68. Available online at: http://www.cesifo-group.de/DocDL/SD-16-2011.pdf


Building a Raspberry Pi Cluster

Part III — OpenMPI, Python, and Parallel Jobs

Garrett Mills · Apr 29, 2019 · 10 min read

This is Part III in my series on building a small-scale HPC cluster. Be sure to check out Part I and Part II.

In the first two parts, we set up our Pi cluster with the SLURM scheduler and ran some test jobs using R. We looked at how to schedule many small jobs using SLURM, and we installed software the easy way by running the package manager install command on all of the nodes simultaneously.

In this part, we’re going to set up OpenMPI, install Python the “better” way, and take a look at running some jobs in parallel to make use of the multiple cluster nodes.
Part 1: Installing OpenMPI
https://www.open-mpi.org/

OpenMPI is an open-source implementation of the Message Passing Interface concept. An MPI is software that connects processes running across multiple computers and allows them to communicate as they run. This is what allows a single script to run a job spread across multiple cluster nodes.

We’re going to install OpenMPI the easy way, as we did with R. While it is possible to install it using the “better” way (spoiler alert: compile from source), it’s more difficult to get it to play nicely with SLURM.

We want it to play nicely because SLURM will auto-configure the environment when a job is running so that OpenMPI has access to all the resources SLURM has allocated the job. This saves us a lot of headache and setup for each job.
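As a quick sanity check (assuming a reasonably recent SLURM build), you can ask srun which MPI plugin types it knows about:

$ srun --mpi=list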
1.1 — Install OpenMPI

To install OpenMPI, SSH into the head node of the cluster, and use srun to install OpenMPI on each of the nodes:

$ sudo su -
# srun --nodes=3 apt install openmpi-bin openmpi-common libopenmpi3 libopenmpi-dev -y

(Obviously, replace --nodes=3 with however many nodes are in your cluster.)
1.2 — Test it out!

Believe it or not, that’s all it took to get OpenMPI up and running on our cluster. Now, we’re going to create a very basic hello-world program to test it out.

1.2.1 — Create a program.
We’re going to create a C program that creates an MPI cluster with the resources SLURM allocates to our job. Then, it’s going to call a simple print command on each process.

Create the file /clusterfs/hello_mpi.c with the following contents:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    int node;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    printf("Hello World from Node %d!\n", node);
    MPI_Finalize();
    return 0;
}

Here, we include the mpi.h header provided by OpenMPI (plus stdio.h for printf). Then, in the main function, we initialize the MPI cluster, get the rank of the current process (the “node” number it will print), print a message, and close the MPI cluster.

1.2.2 — Compile the program.
We need to compile our C program to run it on the cluster. However, unlike with a normal C program, we won’t just use gcc like you might expect. Instead, OpenMPI provides a compiler that will automatically link the MPI libraries.

Because we need to use the compiler provided by OpenMPI, we’re going to grab a shell instance from one of the nodes:

login1$ srun --pty bash
node1$ cd /clusterfs
node1$ mpicc hello_mpi.c
node1$ ls
a.out* hello_mpi.c
node1$ exit

The a.out file is the compiled program that will be run by the cluster.
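(A small optional tweak: mpicc -o hello_mpi hello_mpi.c would name the binary hello_mpi instead of a.out; if you do that, change the mpirun line in the submission script below to match.)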

1.2.3 — Create a submission script.
Now, we will create the submission script that runs our program on the cluster. Create the file /clusterfs/sub_mpi.sh:

#!/bin/bash
cd $SLURM_SUBMIT_DIR

# Print the node that starts the process
echo "Master node: $(hostname)"

# Run our program using OpenMPI.
# OpenMPI will automatically discover resources from SLURM.
mpirun a.out

1.2.4 — Run the job.
Run the job by submitting it to SLURM and requesting a couple of nodes and processes:

$ cd /clusterfs
$ sbatch --nodes=3 --ntasks-per-node=2 sub_mpi.sh
Submitted batch job 1211

This tells SLURM to get 3 nodes and 2 cores on each of those nodes. If we have everything working properly, this should give us an MPI world with 6 processes (which our hello-world program labels as nodes). Assuming this works, we should see some output in our slurm-XXX.out file:

Master node: node1
Hello World from Node 0!
Hello World from Node 1!
Hello World from Node 2!
Hello World from Node 3!
Hello World from Node 4!
Hello World from Node 5!

Part 2: Installing Python (the “better” way)

Okay, so for a while now, I’ve been alluding to a “better” way to install cluster software. Let’s talk about that. Up until now, when we’ve installed software on the cluster, we’ve essentially done it individually on each node. While this works, it quickly becomes inefficient. Instead of duplicating effort trying to make sure the same software versions and environment are available on every single node, wouldn’t it be great if we could install software centrally for all nodes?

Well, luckily a new feature in the modern Linux operating system allows us to do just that: compile from source! (/s) Rather than install software through the individual package managers of each node, we can compile it from source and configure it to be installed to a directory in the shared storage. Because the architecture of our nodes is identical, they can all run the software from shared storage.

This is useful because it means that we only have to maintain a single installation of a piece of software and its configuration. On the downside, compiling from source is a lot slower than installing pre-built packages. It’s also more difficult to update. Trade-offs.

In this section, we’re going to install Python3 from source and use it across our different nodes.
2.0 — Prerequisites

In order for the Python build to complete successfully, we need to make sure that we have the libraries it requires installed on one of the nodes. We’ll only install these on one node and we’ll make sure to only build Python on that node:

$ srun --nodelist=node1 bash
node1$ sudo apt install -y build-essential python-dev python-setuptools python-pip python-smbus libncursesw5-dev libgdbm-dev libc6-dev zlib1g-dev libsqlite3-dev tk-dev libssl-dev openssl libffi-dev

Hooo boy. That’s a fair number of dependencies. While you can technically build Python itself without running this step, we want to be able to access Pip and a number of other extra tools provided with Python. These tools will only compile if their dependencies are available.

Note that these dependencies don’t need to be present to use our new Python install, just to compile it.
2.1 — Download Python

Let’s grab a copy of the Python source files so we can build them. We’re going to create a build directory in shared storage and extract the files there. You can find links to the latest version of Python here, but I’ll be installing 3.7. Note that we want the “Gzipped source tarball” file:

$ cd /clusterfs && mkdir build && cd build
$ wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tgz
$ tar xvzf Python-3.7.3.tgz
… tar output …
$ cd Python-3.7.3

At this point, we should have the Python source extracted to the directory /clusterfs/build/Python-3.7.3.
2.2 — Configure Python

For those of you who have installed software from source before, what follows is pretty much a standard configure; make; make install, but we’re going to change the prefix directory.

The first step in building Python is configuring the build to our environment. This is done with the ./configure command. Running this by itself will configure Python to install to the default directory. However, we don’t want this, so we’re going to pass it a custom flag. This will tell Python to install to a folder on the shared storage. Buckle up, because this may take a while:

$ mkdir /clusterfs/usr # directory Python will install to
$ cd /clusterfs/build/Python-3.7.3
$ srun --nodelist=node1 bash # configure will be run on node1
node1$ ./configure \
    --enable-optimizations \
    --prefix=/clusterfs/usr \
    --with-ensurepip=install
…configure output…

2.3 — Build Python

Now that we’ve configured Python to our environment, we need to actually compile the binaries and get them ready to run. We will do this with the make command. However, because Python is a fairly large program, and the RPi isn’t exactly the biggest workhorse in the world, it will take a little while to compile.

So, rather than leave a terminal open the whole time Python compiles, we’re going to use our shiny new scheduler! We can submit a job that will compile it and we can just wait for the job to finish. To do this, create a submission script in the Python source folder:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --nodelist=node1

cd $SLURM_SUBMIT_DIR
make -j4

This script will request 4 cores on node1 and will run the make command on those cores (the -j4 flag tells make to run 4 compile jobs in parallel). Make is the software tool that will compile Python for us. Now, just submit the job from the login node:

$ cd /clusterfs/build/Python-3.7.3
$ sbatch sub_build_python.sh
Submitted batch job 1212

Now, we just wait for the job to finish running. It took about an hour for me on an RPi 3B+. You can view its progress using the squeue command, and by looking in the SLURM output file:

$ tail -f slurm-1212.out # replace "1212" with the job ID

2.4 — Install Python

Lastly, we will install Python to the /clusterfs/usr directory we created. This will also take a while, though not as long as compiling. We can use the scheduler for this task. Create a submission script in the source directory:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --nodelist=node1

cd $SLURM_SUBMIT_DIR
make install

However, we don’t want just any old program to be able to modify or delete the Python install files. So, just like with any normal program, we’re going to install Python as root so it cannot be modified by normal users. To do this, we’ll submit the install job as a root user:

$ sudo su -
# cd /clusterfs/build/Python-3.7.3
# sbatch sub_install_python.sh
Submitted batch job 1213

Again, you can monitor the status of the job. When it completes, we should have a functional Python install!
2.5 — Test it out.

We should now be able to use our Python install from any of the nodes. As a basic first test, we can run a command on all of the nodes:

$ srun --nodes=3 /clusterfs/usr/bin/python3 -c "print('Hello')"
Hello
Hello
Hello

We should also have access to pip:

$ srun --nodes=1 /clusterfs/usr/bin/pip3 --version
pip 19.0.3 from /clusterfs/usr/lib/python3.7/site-packages/pip (python 3.7)

The exact same Python installation should now be accessible from all the nodes. This is useful because, if you want to use some library for a job, you can install it once on this install, and all the nodes can make use of it. It’s cleaner to maintain.
Part 3: A Python MPI Hello-World

Finally, to test out our new OpenMPI and Python installations, we’re going to throw together a quick Python job that uses OpenMPI. To interface with OpenMPI in Python, we’re going to be using a fantastic library called mpi4py.

For our demo, we’re going to use one of the demo programs in the mpi4py repo. We’re going to calculate the value of pi (the number) in parallel.
3.0 — Prerequisites

Before we can write our script, we need to install a few libraries. Namely, we will install the mpi4py library, and numpy. NumPy is a package that contains many useful structures and operations used for scientific computing in Python. We can install these libraries through pip, using a batch job. Create the file /clusterfs/calc-pi/sub_install_pip.sh:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

/clusterfs/usr/bin/pip3 install numpy mpi4py

Then, submit the job. We have to do this as root because it will be modifying our Python install:

$ cd /clusterfs/calc-pi
$ sudo su
# sbatch sub_install_pip.sh
Submitted batch job 1214

Now, we just wait for the job to complete. When it does, we should be able to use the mpi4py and numpy libraries:

$ srun bash
node1$ /clusterfs/usr/bin/python3
Python 3.7.3 (default, Mar 27 2019, 13:41:07)
[GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> from mpi4py import MPI

3.1 — Create the Python program.

As mentioned above, we’re going to use one of the demo programs provided in the mpi4py repo. However, because we’ll be running it through the scheduler, we need to modify it to not require any user input. Create the file /clusterfs/calc-pi/calculate.py:
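(The original post embedded the script here, and the embed didn’t survive. As a stand-in, here is a sketch based on the mpi4py repo’s cpi.py demo, with the interval count hard-coded instead of read from user input; it matches the variables referenced later in this article, but it may not be the author’s exact file.)

from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
nprocs = comm.Get_size()
myrank = comm.Get_rank()

n = numpy.array(0, dtype='i')       # number of intervals
pi = numpy.array(0.0, dtype='d')    # reduced result on rank 0
mypi = numpy.array(0.0, dtype='d')  # this rank's partial sum

def comp_pi(n, myrank=0, nprocs=1):
    # Midpoint-rule integration of 4/(1+x^2) over [0,1], strided across ranks
    h = 1.0 / n
    s = 0.0
    for i in range(myrank + 1, n + 1, nprocs):
        x = h * (i - 0.5)
        s += 4.0 / (1.0 + x**2)
    return s * h

if myrank == 0:
    _n = 20  # change this number to control the intervals
    n.fill(_n)

# Broadcast the interval count to every rank, compute partial sums,
# then reduce them back to rank 0.
comm.Bcast([n, MPI.INT], root=0)
mypi.fill(comp_pi(n, myrank, nprocs))
comm.Reduce([mypi, MPI.DOUBLE], [pi, MPI.DOUBLE], op=MPI.SUM, root=0)

if myrank == 0:
    error = abs(pi - numpy.pi)
    print("pi is approximately %.16f, error is %.16f" % (pi, error))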

This program will split the work of computing our approximation of pi out to however many processes we provide it. Then, it will print the computed value of pi, as well as the error from the stored value of pi.
3.2 — Create and submit the job.

We can run our job using the scheduler. We will request some number of cores from the cluster, and SLURM will pre-configure the MPI environment with those cores. Then, we just run our Python program using OpenMPI. Let’s create the submission file /clusterfs/calc-pi/sub_calc_pi.sh:

#!/bin/bash
#SBATCH --ntasks=6

cd $SLURM_SUBMIT_DIR
mpiexec -n 6 /clusterfs/usr/bin/python3 calculate.py

Here, we use the --ntasks flag. Where the --ntasks-per-node flag requests some number of cores for each node, the --ntasks flag requests a specific number of cores total. Because we are using MPI, we can have cores across machines. Therefore, we can just request the number of cores that we want. In this case, we ask for 6 cores.

To run the actual program, we use mpiexec and tell it we have 6 cores. We tell OpenMPI to execute our Python program using the version of Python we installed.

Note that you can adjust the number of cores to be higher/lower as you want. Just make sure you change the mpiexec -n ## flag to match.
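(If you’d rather not keep those two numbers in sync by hand, SLURM exports the allocation size to the job as $SLURM_NTASKS, so mpiexec -n $SLURM_NTASKS /clusterfs/usr/bin/python3 calculate.py should track whatever --ntasks you request.)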

Finally, we can run the job:

$ cd /clusterfs/calc-pi
$ sbatch sub_calc_pi.sh
Submitted batch job 1215

3.3 — Success!

The calculation should only take a couple seconds on the cluster. When the job completes (remember — you can monitor it with squeue), we should see some output in the slurm-####.out file:

$ cd /clusterfs/calc-pi
$ cat slurm-1215.out
pi is approximately 3.1418009868930934, error is 0.0002083333033003

You can tweak the program to calculate a more accurate value of pi by increasing the number of intervals on which the calculation is run. Do this by modifying the calculate.py file:

if myrank == 0:
    _n = 20  # change this number to control the intervals
    n.fill(_n)

For example, here’s the calculation run on 500 intervals:

pi is approximately 3.1415929869231265, error is 0.0000003333333334

Conclusion

We now have a basically complete cluster. We can run jobs using the SLURM scheduler; we discussed how to install software the lazy way and the better way; we installed OpenMPI; and we ran some example programs that use it.

Hopefully, your cluster is functional enough that you can add software and components to it to suit your projects. In the fourth and final installment of this series, we’ll discuss a few maintenance niceties that are more related to managing installed software and users than the actual functionality of the cluster.

Happy Computing!

— Garrett Mills
