How To Install Jitsi Meet on Ubuntu 18.04

Posted April 16, 2020

By Elliot Cooper


The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.
Introduction

Jitsi Meet is an open-source video-conferencing application based on WebRTC. A Jitsi Meet server provides multi-person video conference rooms that you can access using nothing more than your browser and provides comparable functionality to a Zoom or Skype conference call. The benefit of a Jitsi conference is that all your data only passes through your server, and the end-to-end TLS encryption ensures that no one can snoop on the call. With Jitsi you can be sure that your private information stays that way.

In this tutorial, you will install and configure a Jitsi Meet server on Ubuntu 18.04. The default configuration allows anyone to create a new conference room. This is not ideal for a server that is publicly available on the internet, so you will also configure Jitsi Meet so that only registered users can create new conference rooms. After a conference room has been created, anyone can join it, as long as they have the unique address and the optional password.
Prerequisites

Before you begin this guide you’ll need the following:

One Ubuntu 18.04 server set up by following the Initial Server Setup with Ubuntu 18.04 tutorial, including a non-root sudo-enabled user. The size of the server you will need mostly depends on the available bandwidth and the number of participants you expect to be using the server. The following table will give you some idea of what is needed.
A domain name configured to point to your server. You can learn how to point domains to DigitalOcean Droplets by following the How To Set Up a Host Name with DigitalOcean tutorial. Throughout this guide, the example domain name jitsi.your-domain is used.

When you are choosing a server to run your Jitsi Meet instance you will need to consider the system resources needed to host conference rooms. The following benchmark information was collected from a single-core virtual machine using high-quality video settings:
Participants         CPU    Server Bandwidth
Two Participants     3%     30Kbps Up, 100Kbps Down
Three Participants   15%    7Mbps Up, 6.5Mbps Down

The jump in resource use between two and three participants is because Jitsi will route the call data directly between the clients when there are two of them. When more than two clients are present then call data is routed through the Jitsi Meet server.
Step 1 — Setting the System Hostname

In this step, you will change the system’s hostname to match the domain name that you intend to use for your Jitsi Meet instance and resolve that hostname to the localhost IP, 127.0.0.1. Jitsi Meet uses both of these settings when it installs and generates its configuration files.

First, set the system’s hostname to the domain name that you will use for your Jitsi instance. The following command will set the current hostname and modify the /etc/hostname file that holds the system’s hostname between reboots:

sudo hostnamectl set-hostname jitsi.your-domain

The command that you ran breaks down as follows:

hostnamectl is a utility from the systemd tool suite to manage the system hostname.
set-hostname sets the system hostname.

Check that this was successful by running the following:

hostname

This will return the hostname you set with the hostnamectl command:

Output
jitsi.your-domain

Next, you will set a local mapping of the server’s hostname to the loopback IP address, 127.0.0.1. Do this by opening the /etc/hosts file with a text editor:

sudo nano /etc/hosts

Then, add the following line:
/etc/hosts

127.0.0.1 jitsi.your-domain

Mapping your Jitsi Meet server’s domain name to 127.0.0.1 allows your Jitsi Meet server to use several networked processes that accept local connections from each other on the 127.0.0.1 IP address. These connections are authenticated and encrypted with a TLS certificate, which is registered to your domain name. Locally mapping the domain name to 127.0.0.1 makes it possible to use the TLS certificate for these local network connections.

Save and exit your file.
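If you would like to confirm the mapping, you can query the local resolver database directly. This check is optional and not part of the original installation steps; substitute your own domain name:

getent hosts jitsi.your-domain

The command should print a single line mapping 127.0.0.1 to jitsi.your-domain.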

Your server now has the hostname that Jitsi requires for installation. In the next step, you will open the firewall ports that are needed by Jitsi and the TLS certificate installer.
Step 2 — Configuring the Firewall

When you followed the Initial Server Setup with Ubuntu 18.04 guide you enabled the UFW firewall and opened the SSH port. The Jitsi server needs some ports opened so that it can communicate with the call clients. Also, the TLS installation process needs to have a port open so that it can authenticate the certificate request.

The ports that you will open are the following:

80/tcp used in the TLS certificate request.
443/tcp used for the conference room creation web page.
4443/tcp,10000/udp used to transmit and receive the encrypted call traffic.

Run the following ufw commands to open these ports:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 4443/tcp
sudo ufw allow 10000/udp

Check that they were all added with the ufw status command:

sudo ufw status

You will see the following output if these ports are open:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
4443/tcp                   ALLOW       Anywhere
10000/udp                  ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
4443/tcp (v6)              ALLOW       Anywhere (v6)
10000/udp (v6)             ALLOW       Anywhere (v6)

The server is now ready for the Jitsi installation, which you will complete in the next step.
Step 3 — Installing Jitsi Meet

In this step, you will add the Jitsi stable repository to your server and then install the Jitsi Meet package from that repository. This will ensure that you are always running the latest stable Jitsi Meet package.

First, download the Jitsi GPG key with the wget downloading utility:

wget https://download.jitsi.org/jitsi-key.gpg.key

The apt package manager will use this GPG key to validate the packages that you will download from the Jitsi repository.

Next, add the GPG key you downloaded to apt’s keyring using the apt-key utility:

sudo apt-key add jitsi-key.gpg.key

You can now delete the GPG key file as it is no longer needed:

rm jitsi-key.gpg.key

Now, you will add the Jitsi repository to your server by creating a new sources file for it. Create and open the new file with your editor:

sudo nano /etc/apt/sources.list.d/jitsi-stable.list

Add this line to the file for the Jitsi repository:
/etc/apt/sources.list.d/jitsi-stable.list

deb https://download.jitsi.org stable/

Save and exit your editor.

Finally, perform a system update to collect the package list from the Jitsi repository and then install the jitsi-meet package:

sudo apt update
sudo apt install jitsi-meet

During the installation of jitsi-meet you will be prompted to enter the domain name (for example, jitsi.your-domain) that you want to use for your Jitsi Meet instance.

Image showing the jitsi-meet installation hostname dialog

Note: Move the cursor from the hostname field to the OK button with the TAB key. Press ENTER when the OK button is highlighted to submit the hostname.

You will then be shown a new dialog box that asks if you want Jitsi to create and use a self-signed TLS certificate or use an existing one you already have:

Image showing the jitsi-meet installation certificate dialog

If you do not have a TLS certificate for your Jitsi domain select the first, Generate a new self-signed certificate, option.
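Once the installation finishes, you can optionally confirm that the core Jitsi services came up correctly by querying systemd for their status. This extra check is not part of the original steps; the service names are the same ones you will restart later in this guide:

sudo systemctl status prosody.service jicofo.service jitsi-videobridge2.service

Each unit should report active (running). Press Q to leave the status view.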

Your Jitsi Meet instance is now installed using a self-signed TLS certificate. This will cause browser warnings, so you will get a signed TLS certificate in the next step.
Step 4 — Obtaining a Signed TLS Certificate

Jitsi Meet uses TLS certificates to encrypt the call traffic so that no one can listen to your call as it travels over the internet. TLS certificates are the same certificates that are used by websites to enable HTTPS URLs.

Jitsi Meet supplies a script that uses the Certbot utility to automatically obtain a TLS certificate for your domain name. You will need to install Certbot before you run this certificate installation script.

First, add the Certbot repository to your system to ensure that you have the latest version of Certbot. Run the following command to add the new repository and update your system:

sudo add-apt-repository ppa:certbot/certbot

Next, install the certbot package:

sudo apt install certbot

Your server is now ready to run the TLS certificate installation program provided by Jitsi Meet:

sudo /usr/share/jitsi-meet/scripts/install-letsencrypt-cert.sh

When you run the script you will be shown the following prompt for an email address:

Output
-------------------------------------------------------------------------
This script will:
- Need a working DNS record pointing to this machine (for domain jitsi.example.com)
- Download certbot-auto from https://dl.eff.org to /usr/local/sbin
- Install additional dependencies in order to request Let's Encrypt certificate
- If running with jetty serving web content, will stop Jitsi Videobridge
- Configure and reload nginx or apache2, whichever is used
- Configure the coturn server to use Let's Encrypt certificate and add required deploy hooks
- Add command in weekly cron job to renew certificates regularly

You need to agree to the ACME server's Subscriber Agreement (https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf)
by providing an email address for important account notifications
Enter your email and press [ENTER]:

This email address will be submitted to the certificate issuer https://letsencrypt.org and will be used to notify you about security and other matters related to the TLS certificate. You must enter an email address here to proceed with the installation. The installation will then complete without any further prompts.

When it finishes, your Jitsi Meet instance will be configured to use a signed TLS certificate for your domain name. Certificate renewals will also happen automatically because the installer placed a renewal script at /etc/cron.weekly/letsencrypt-renew that will run each week.

The TLS installer used port 80 to verify you had control of your domain name. Now that you have obtained the certificate your server no longer needs to have port 80 open because port 80 is used for regular, non-encrypted HTTP traffic. Jitsi Meet only serves its website via HTTPS on port 443.

Close this port in your firewall with the following ufw command:

sudo ufw delete allow 80/tcp

Your Jitsi Meet server is now up and running and available for testing. Open a browser and point it to your domain name. You will be able to create a new conference room and invite others to join you.
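If you would rather check from the command line before opening a browser, an optional curl request will confirm that the web server answers over HTTPS with the new certificate (replace the domain with your own):

curl -I https://jitsi.your-domain

A response beginning with HTTP/1.1 200 OK (or HTTP/2 200, depending on how your web server is configured) means the Jitsi Meet welcome page is being served with the signed certificate.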

The default configuration for Jitsi Meet is that anyone visiting your Jitsi Meet server homepage can create a new conference room. Each conference room consumes your server’s system resources, and you don’t want unauthorized users doing that. In the next step, you will configure your Jitsi Meet instance to only allow registered users to create conference rooms.
Step 5 — Locking Conference Creation

In this step, you will configure your Jitsi Meet server to only allow registered users to create conference rooms. The files that you will edit were generated by the installer and are configured with your domain name.

The variable your_domain will be used in place of a domain name in the following examples.

First, open /etc/prosody/conf.avail/your_domain.cfg.lua with a text editor:

sudo nano /etc/prosody/conf.avail/your_domain.cfg.lua

Edit this line:
/etc/prosody/conf.avail/your_domain.cfg.lua


authentication = "anonymous"

To the following:
/etc/prosody/conf.avail/your_domain.cfg.lua


authentication = "internal_plain"

This configuration tells Jitsi Meet to force username and password authentication before allowing conference room creation by a new visitor.

Then, in the same file, add the following section to the end of the file:
/etc/prosody/conf.avail/your_domain.cfg.lua


VirtualHost "guest.your_domain"
    authentication = "anonymous"
    c2s_require_encryption = false

This configuration allows anonymous users to join conference rooms that were created by an authenticated user. However, to enter a room, a guest must have its unique address and, if one was set, its password.

Here, you added guest. to the front of your domain name. For example, for jitsi.your-domain you would put guest.jitsi.your-domain. The guest. hostname is only used internally by Jitsi Meet. You will never enter it into a browser or need to create a DNS record for it.

Open another configuration file at /etc/jitsi/meet/your_domain-config.js with a text editor:

sudo nano /etc/jitsi/meet/your_domain-config.js

Edit this line:
/etc/jitsi/meet/your_domain-config.js


// anonymousdomain: 'guest.example.com',

To the following:
/etc/jitsi/meet/your_domain-config.js


anonymousdomain: 'guest.your_domain',

Again, by using the guest.your_domain hostname that you set up earlier, this configuration tells Jitsi Meet which internal hostname to use for unauthenticated guests.

Next, open /etc/jitsi/jicofo/sip-communicator.properties:

sudo nano /etc/jitsi/jicofo/sip-communicator.properties

And add the following line to complete the configuration changes:
/etc/jitsi/jicofo/sip-communicator.properties

org.jitsi.jicofo.auth.URL=XMPP:your_domain

This configuration points one of the Jitsi Meet processes to the local server that performs the user authentication that is now required.

Your Jitsi Meet instance is now configured so that only registered users can create conference rooms. After a conference room is created, anyone can join it without needing to be a registered user. All they will need is the unique conference room address and an optional password set by the room’s creator.

Now that Jitsi Meet is configured to require authenticated users for room creation you need to register these users and their passwords. You will use the prosodyctl utility to do this.

Run the following command to add a user to your server:

sudo prosodyctl register user your_domain password

The user that you add here is not a system user. They will only be able to create a conference room and are not able to log in to your server via SSH.
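For example, to create an account named alice (the name and password here are only placeholders), and later change the password or remove the account again, you could use the register, passwd, and deluser subcommands of prosodyctl:

sudo prosodyctl register alice your_domain password
# change the password for an existing account (prompts for the new password)
sudo prosodyctl passwd alice@your_domain
# remove the account entirely
sudo prosodyctl deluser alice@your_domain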

Finally, restart the Jitsi Meet processes to load the new configuration:

sudo systemctl restart prosody.service
sudo systemctl restart jicofo.service
sudo systemctl restart jitsi-videobridge2.service

The Jitsi Meet instance will now request a username and password with a dialog box when a conference room is created.

Image showing the Jitsi username and password box

Your Jitsi Meet server is now set up and securely configured.
Conclusion

In this article, you deployed a Jitsi Meet server that you can use to host secure and private video conference rooms. You can extend your Jitsi Meet instance with instructions from the Jitsi Meet Wiki.

By Elliot Cooper

Editor: Kathryn Hancox

2 Comments

Marcozynos83 April 22, 2020

Hi, great guide. I followed your instructions.
Everything works, but I don’t see the video or the chat. The video starts the webcam, but I only see black. In the chat, I write a message but it does not appear.

tech121694 about 9 hours ago

The guide looks good, but shouldn’t port 80 stay open for cert renewals?

eg. The guide states the following about closing port 80 after verifying control over the domain name:

“The TLS installer used port 80 to verify you had control of your domain name. Now that you have obtained the certificate your server no longer needs to have port 80 open because port 80 is used for regular, non-encrypted HTTP traffic.”

Unless the admin remembers to re-open it, won’t closing port 80 prevent the certbot weekly cron task from renewing the letsencrypt SSL certificate when it gets close to the SSL expiration date?

Cheers

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

AMD Radeon RX Vega 11

The Radeon RX Vega 11 is an integrated graphics solution by AMD, launched in February 2018. Built on the 14 nm process, and based on the Raven graphics processor, the device supports DirectX 12. This ensures that all modern games will run on Radeon RX Vega 11. It features 704 shading units, 44 texture mapping units, and 8 ROPs. The GPU is operating at a frequency of 300 MHz, which can be boosted up to 1240 MHz.
Its power draw is rated at 65 W maximum.

Graphics Processor
GPU Name: Raven
Architecture: GCN 5.0
Foundry: GlobalFoundries
Process Size: 14 nm
Transistors: 4,940 million
Die Size: 210 mm²
Integrated Graphics
Release Date: Feb 13th, 2018
Generation: Raven Ridge (Vega)
Production: Active
Bus Interface: IGP

Relative Performance
8%GeForce 210
10%GeForce 9400 GT
13%Radeon HD 4550
17%Radeon HD 5450
19%Radeon HD 6450
20%GeForce GT 520
25%GeForce GT 220
33%GeForce GT 430
34%Radeon HD 5570
36%Radeon HD 4670
38%GeForce GT 440
39%GeForce GT 240
45%GeForce 9600 GT
48%Radeon HD 5670
51%GeForce GT 640
53%GeForce 9800 GT
55%Radeon HD 6670
57%Radeon HD 4830
61%Radeon HD 4770
64%Radeon HD 4850
66%GeForce GTS 250
67%GeForce GTS 450
70%Radeon HD 5750
80%Radeon HD 7750
80%Radeon HD 5770
82%GeForce GTX 260
83%Radeon HD 4870
83%Radeon Vega 8
84%GeForce GTX 550 Ti
85%GeForce GTX 650
90%Radeon HD 5830
94%Radeon HD 6790
96%Radeon HD 4890
99%GeForce GTX 460
100%Radeon RX Vega 11
101%GeForce GT 1030
102%GeForce GTX 275
102%Radeon HD 7770 GHz Edition
102%GeForce GTX 280
104%GeForce GTX 465
109%Radeon HD 6850
112%GeForce GTX 285
120%Radeon HD 5850
122%GeForce GTX 650 Ti
122%Radeon HD 7790
127%Radeon RX 550
127%GeForce GTX 470
132%Radeon HD 6870
141%Radeon HD 5870
145%GeForce GTX 560 Ti
146%Radeon HD 4870 X2
149%GeForce GTX 750 Ti
149%Radeon HD 6950
155%GeForce GTX 295
156%GeForce GTX 650 Ti Boost
158%Radeon HD 7850
160%GeForce GTX 480
168%Radeon HD 6970
171%Radeon R7 265
174%Radeon RX 460
174%Radeon R7 370
174%GeForce GTX 660
175%GeForce GTX 570
178%Radeon RX 560
189%GeForce GTX 950
191%Radeon HD 7870 GHz Edition
195%Radeon R9 270X
197%GeForce GTX 660 Ti
198%GeForce GTX 580
200%GeForce GTX 1050
200%Radeon HD 7950
204%Radeon HD 5970
210%GeForce GTX 760
226%GeForce GTX 670
232%GeForce GTX 960
238%Radeon R9 380
239%Radeon R9 285
242%Radeon HD 7970
244%GeForce GTX 680
250%GeForce GTX 1050 Ti
257%GeForce GTX 770
262%Radeon HD 6990
269%Radeon R9 280X
273%GeForce GTX 590
273%Radeon HD 7970 GHz Edition
302%GeForce GTX 780
313%GeForce GTX 1650
332%Radeon RX 470
337%Radeon R9 290
355%Radeon RX 570
360%Radeon R9 390
362%Radeon R9 290X
366%GeForce GTX 970
371%GeForce GTX TITAN
376%GeForce GTX 780 Ti
382%Radeon R9 390X
383%Radeon HD 7990
385%Radeon RX 480
388%GeForce GTX 690
399%GeForce GTX 1060 6 GB
404%Radeon RX 5500 OEM
409%Radeon RX 580
412%Radeon RX 5500 XT
418%GeForce GTX 980
421%GeForce GTX 1650 SUPER
423%Radeon R9 FURY
442%Radeon RX 590
465%Radeon R9 295X2
468%GeForce GTX 1660
473%Radeon R9 FURY X
481%GeForce GTX 980 Ti
494%GeForce GTX TITAN X
526%GeForce GTX 1660 SUPER
537%GeForce GTX 1660 Ti
537%GeForce GTX 1070
572%Radeon RX Vega 56
607%GeForce GTX 1070 Ti
624%Radeon RX Vega 64
633%GeForce GTX 1080
633%GeForce RTX 2060
659%Radeon RX 5700
711%GeForce RTX 2060 SUPER
737%GeForce RTX 2070
754%Radeon RX 5700 XT
763%Radeon VII
782%TITAN X Pascal
806%GeForce GTX 1080 Ti
815%GeForce RTX 2070 SUPER
867%GeForce RTX 2080
919%GeForce RTX 2080 SUPER
1014%GeForce RTX 2080 Ti
100%Radeon RX Vega 11
Based on TPU review data: “Performance Summary” at 1920×1080

Clock Speeds
Base Clock: 300 MHz
Boost Clock: 1240 MHz
Memory Clock: System Shared

Memory
Memory Size: System Shared
Memory Type: System Shared
Memory Bus: System Shared
Bandwidth: System Dependent

Render Config
Shading Units: 704
TMUs: 44
ROPs: 8
Compute Units: 11

Theoretical Performance
Pixel Rate: 9.920 GPixel/s
Texture Rate: 54.56 GTexel/s
FP16 (half) performance: 3.492 TFLOPS (2:1)
FP32 (float) performance: 1.746 TFLOPS
FP64 (double) performance: 109.1 GFLOPS (1:16)
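As a quick consistency check, these theoretical figures follow from the render configuration and the 1240 MHz boost clock: 8 ROPs × 1.24 GHz ≈ 9.92 GPixel/s, 44 TMUs × 1.24 GHz ≈ 54.56 GTexel/s, and 704 shading units × 2 FLOPs per clock (fused multiply-add) × 1.24 GHz ≈ 1.746 TFLOPS FP32, with FP16 running at twice and FP64 at one sixteenth of the FP32 rate.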

Board Design
Slot Width: IGP
TDP: 65 W
Outputs: No outputs
Power Connectors: None

Graphics Features
DirectX: 12 (12_1)
OpenGL: 4.6
OpenCL: 2.0
Vulkan: 1.2.131
Shader Model: 6.4

Card Notes
Ryzen 5 2400G

Raven GPU Notes
Architecture Codename: Arctic Islands
CLRX Version: GCN 1.4
Graphics/Compute: GFX9 (gfx902) / (gfx903)
Display Core Next: 1.0
Video Core Next: 1.0

CS Visualized: Useful Git Commands

dev.to
Lydia Hallie

Although Git is a very powerful tool, I think most people would agree when I say it can also be… a total nightmare 😐 I’ve always found it very useful to visualize in my head what’s happening when working with Git: how are the branches interacting when I perform a certain command, and how will it affect the history? Why did my coworker cry when I did a hard reset on master, force pushed to origin and rimraf’d the .git folder?

I thought it would be the perfect use case to create some visualized examples of the most common and useful commands! 🥳 Many of the commands I’m covering have optional arguments that you can use in order to change their behavior. In my examples, I’ll cover the default behavior of the commands without adding (too many) config options! 😄
Merging

Having multiple branches is extremely convenient to keep new changes separated from each other, and to make sure you don’t accidentally push unapproved or broken changes to production. Once the changes have been approved, we want to get these changes in our production branch!

One way to get the changes from one branch to another is by performing a git merge! There are two types of merges Git can perform: a fast-forward, or a no-fast-forward 🐢

This may not make a lot of sense right now, so let’s look at the differences!
Fast-forward (--ff)

A fast-forward merge can happen when the current branch has no extra commits compared to the branch we’re merging. Git is… lazy and will first try to perform the easiest option: the fast-forward! This type of merge doesn’t create a new commit, but rather merges the commit(s) on the branch we’re merging right in the current branch 🥳
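As a rough sketch (assuming we are on master and dev is strictly ahead of it), the fast-forward case is simply:

git checkout master   # the branch we want to bring up to date
git merge dev         # no extra commits on master, so Git just moves the branch pointer forward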

Perfect! We now have all the changes that were made on the dev branch available on the master branch. So, what’s the no-fast-forward all about?
No-fast-forward (--no-ff)

It’s great if your current branch doesn’t have any extra commits compared to the branch that you want to merge, but unfortunately that’s rarely the case! If we committed changes on the current branch that the branch we want to merge doesn’t have, git will perform a no-fast-forward merge.

With a no-fast-forward merge, Git creates a new merging commit on the active branch. The commit’s parent commits point to both the active branch and the branch that we want to merge!
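A minimal sketch of the same merge when master also has its own commits (you can also pass --no-ff to force a merge commit even when a fast-forward would be possible):

git checkout master
git merge dev   # both branches have new commits, so Git creates a merge commit with two parents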

No big deal, a perfect merge! 🎉 The master branch now contains all the changes that we’ve made on the dev branch.
Merge Conflicts

Although Git is good at deciding how to merge branches and add changes to files, it cannot always make this decision all by itself 🙂 This can happen when the two branches we’re trying to merge have changes on the same line in the same file, or if one branch deleted a file that another branch modified, and so on.

In that case, Git will ask you to help decide which of the two options we want to keep! Let’s say that on both branches, we edited the first line in the README.md.

If we want to merge dev into master, this will end up in a merge conflict: would you like the title to be Hello! or Hey!?

When trying to merge the branches, Git will show you where the conflict happens. We can manually remove the changes we don’t want to keep, save the changes, add the changed file again, and commit the changes 🥳
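Sketching the Hello!/Hey! example, the conflicted section of README.md would look something like this:

<<<<<<< HEAD
Hello!
=======
Hey!
>>>>>>> dev

After editing the file so that only the title you want remains, you finish the merge with:

git add README.md   # mark the conflict as resolved
git commit          # create the merge commit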

Yay! Although merge conflicts are often quite annoying, it makes total sense: Git shouldn’t just assume which change we want to keep.
Rebasing

We just saw how we could apply changes from one branch to another by performing a git merge. Another way of adding changes from one branch to another is by performing a git rebase.

A git rebase copies the commits from the current branch, and puts these copied commits on top of the specified branch.
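A minimal sketch, assuming we are on dev and want to replay its commits on top of master:

git checkout dev
git rebase master   # copy dev's commits and reapply them on top of the latest master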

Perfect, we now have all the changes that were made on the master branch available on the dev branch! 🎊

A big difference compared to merging is that Git won’t try to find out which files to keep and which not to keep. The branch that we’re rebasing always has the latest changes that we want to keep! You won’t run into any merging conflicts this way, and it keeps a nice linear Git history.

This example shows rebasing on the master branch. In bigger projects, however, you usually don’t want to do that. A git rebase changes the history of the project as new hashes are created for the copied commits!

Rebasing is great whenever you’re working on a feature branch, and the master branch has been updated. You can get all the updates on your branch, which would prevent future merging conflicts! 😄
Interactive Rebase

Before rebasing the commits, we can modify them! 😃 We can do so with an interactive rebase. An interactive rebase can also be useful on the branch you’re currently working on, and want to modify some commits.

There are 6 actions we can perform on the commits we’re rebasing:

reword: Change the commit message
edit: Amend this commit
squash: Meld commit into the previous commit
fixup: Meld commit into the previous commit, without keeping the commit’s log message
exec: Run a command on each commit we want to rebase
drop: Remove the commit

Awesome! This way, we can have full control over our commits. If we want to remove a commit, we can just drop it.


Or if we want to squash multiple commits together to get a cleaner history, no problem!
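For instance, an interactive rebase over the last three commits opens a todo list in your editor; the hashes and messages below are placeholders for illustration:

git rebase -i HEAD~3

# Git then opens a todo list like this (oldest commit first):
#
#   pick   1fc6c add the new header
#   squash 21d80 fix a typo in the header
#   drop   84393 leftover debug logging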


Interactive rebasing gives you a lot of control over the commits you’re trying to rebase, even on the current active branch!
Resetting

It can happen that we committed changes that we didn’t want later on. Maybe it’s a WIP commit, or maybe a commit that introduced bugs! 🐛 In that case, we can perform a git reset.

A git reset gets rid of all the current staged files and gives us control over where HEAD should point to.
Soft reset

A soft reset moves HEAD to the specified commit (or the index of the commit compared to HEAD), without getting rid of the changes that were introduced on the commits afterward!

Let’s say that we don’t want to keep the commit 9e78i which added a style.css file, and we also don’t want to keep the commit 035cc which added an index.js file. However, we do want to keep the newly added style.css and index.js file! A perfect use case for a soft reset.
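A sketch of that soft reset, reusing the commit hashes from the example:

git reset --soft HEAD~2   # move HEAD back two commits (9e78i and 035cc), keeping their changes staged
git status                # style.css and index.js still show up, ready to be committed again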

When typing git status, you’ll see that we still have access to all the changes that were made on the previous commits. This is great, as this means that we can fix the contents of these files and commit them again later on!
Hard reset

Sometimes, we don’t want to keep the changes that were introduced by certain commits. Unlike a soft reset, we shouldn’t need to have access to them any more. Git should simply reset its state back to where it was on the specified commit: this even includes the changes in your working directory and staged files! 💣
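A sketch using the same example commits; be careful, since this also throws away staged and working-directory changes:

git reset --hard ec5be   # make ec5be the new HEAD and discard everything that came after it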


Git has discarded the changes that were introduced on 9e78i and 035cc, and reset its state to where it was on commit ec5be.
Reverting

Another way of undoing changes is by performing a git revert. By reverting a certain commit, we create a new commit that contains the reverted changes!

Let’s say that ec5be added an index.js file. Later on, we actually realize we didn’t want this change introduced by this commit anymore! Let’s revert the ec5be commit.
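Sketched as a command, reverting that commit is a one-liner:

git revert ec5be   # creates a new commit that undoes the changes introduced by ec5be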


Perfect! Commit 9e78i reverted the changes that were introduced by the ec5be commit. Performing a git revert is very useful in order to undo a certain commit, without modifying the history of the branch.
Cherry-picking

When a certain branch contains a commit that introduced changes we need on our active branch, we can cherry-pick that commit! By cherry-picking a commit, we create a new commit on our active branch that contains the changes that were introduced by the cherry-picked commit.

Say that commit 76d12 on the dev branch added a change to the index.js file that we want in our master branch. We don’t want the entire dev branch, we just care about this one single commit!
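A sketch of that cherry-pick:

git checkout master      # switch to the branch that should receive the change
git cherry-pick 76d12    # copy the changes from commit 76d12 onto master as a new commit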


Cool, the master branch now contains the changes that 76d12 introduced!
Fetching

If we have a remote Git branch, for example a branch on Github, it can happen that the remote branch has commits that the current branch doesn’t have! Maybe another branch got merged, your colleague pushed a quick fix, and so on.

We can get these changes locally, by performing a git fetch on the remote branch! It doesn’t affect your local branch in any way: a fetch simply downloads new data.
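A minimal sketch, assuming the remote is called origin:

git fetch origin   # download new commits and branch pointers from the remote without touching local branches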


We can now see all the changes that have been made since we last pushed! We can decide what we want to do with the new data now that we have it locally.
Pulling

Although a git fetch is very useful in order to get the remote information of a branch, we can also perform a git pull. A git pull is actually two commands in one: a git fetch, and a git merge. When we’re pulling changes from the origin, we’re first fetching all the data like we did with a git fetch, after which the latest changes are automatically merged into the local branch.
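A sketch of the equivalent pull, again assuming the remote is called origin:

git pull origin master   # fetch the remote master branch and merge it into the currently checked-out branch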


Awesome, we’re now perfectly in sync with the remote branch and have all the latest changes! 🤩
Reflog

Everyone makes mistakes, and that’s totally okay! Sometimes it may feel like you’ve screwed up your git repo so badly that you just want to delete it entirely.

git reflog is a very useful command in order to show a log of all the actions that have been taken! This includes merges, resets, reverts: basically any alteration to your branch.


If you made a mistake, you can easily redo this by resetting HEAD based on the information that reflog gives us!

Say that we actually didn’t want to merge the origin branch. When we execute the git reflog command, we see that the state of the repo before the merge is at HEAD@{1}. Let’s perform a git reset to point HEAD back to where it was on HEAD@{1}!
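Sketched out, that recovery looks like this:

git reflog                  # find the entry from just before the merge, for example HEAD@{1}
git reset --hard HEAD@{1}   # point HEAD and the current branch back to that state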


We can see that the latest action has been pushed to the reflog!

Git has so many useful porcelain and plumbing commands, I wish I could cover them all! 😄 I know there are many other commands or alterations that I didn’t have time to cover right now – let me know what your favorite/most useful commands are, and I may cover them in another post!

And as always, feel free to reach out to me! 😊

Can’t File for Unemployment? Don’t Blame Cobol

Klint Finley | 04.22.20 8:00 AM | Business

Yes, the 60-year-old programming language still powers banks, airlines, and government agencies. But a more likely cause for those error messages was overloaded web servers.

New Jersey Governor Phil Murphy put out a call for Cobol programmers. Photograph: Ron Antonelli/Bloomberg/Getty Images

Programming languages don’t often make national headlines. But New Jersey governor Phil Murphy’s plea earlier this month for developers familiar with the 60-year-old programming language Cobol to help the state process unemployment claims garnered a lot of attention.

Many states have struggled with the unprecedented surge in claims for jobless benefits, which reached 10 times the previous record. But aging computing infrastructure hasn’t helped. Cobol (short for “common business-oriented language”) is old, having been introduced in 1959, before the internet and personal computers were invented. It was a short hop to conclude that New Jersey’s troubles stemmed at least in part from relying on such an ancient language.

But experts say that Cobol probably isn’t to blame for the problems in New Jersey and other states. Cobol is typically used for back-office tasks like processing forms and payments, not for public-facing websites. Errors shown in screenshots of the New Jersey unemployment insurance website were related to Java, a robust programming platform used by the likes of Amazon and Google. In other words, people might be hitting a wall before their claim ever touches a system running Cobol.

The programming language isn’t a problem in and of itself. Many businesses and governments still use Cobol. If you’ve booked a flight, paid for something with a credit card, or received a direct deposit, chances are you’ve interacted with a system powered by Cobol. These applications are often decades old, but according to Gartner analyst Thomas Klinect, they’re fast, reliable, and secure. It would make little sense to spend time and money rewriting crucial business and government software systems into newer languages.

“Cobol isn’t cool, but businesses don’t care about what’s cool,” Klinect says. “They care about what works.”

Clearly, many state unemployment sites are not working, or are not working well. But that may have more to do with aging hardware than an aging programming language. Murphy said some of New Jersey’s computer systems were more than 40 years old. Cobol is best known for running on older mainframes, but it also can run on newer hardware or on more modern mainframes sold by companies like IBM.


The New Jersey Office of Information Technology didn’t answer specific questions about what technologies it uses, but the unemployment insurance service isn’t based on a single technology system, the state’s director of technology Julie Garland Veffer said in a statement. “Different components operate together, such as web servers, application servers, mainframes, and special databases,” said Veffer. “Some of these systems, unlike modern applications or cloud-hosted computing, cannot quickly or readily scale upward.”

Barry Baker, VP of software of IBM’s Z line of mainframes, says that although he can’t name specific customers, the company has been working with states to help them handle an influx of unemployment claims during the pandemic. “A good number of states said they were fine,” he adds. “Some were saying they were going to see a surge in applications and workload, and we just help them grow their systems to process more volume.”

Updating Systems Quickly
If Cobol isn’t the likely choke point for the state’s unemployment system, why did Governor Murphy say the state needs Cobol developers? Klinect at Gartner says it could be the need to update the system for the emergency relief bill passed by Congress, known as the CARES act—which makes more workers eligible for benefits, increases the payouts, and extends the period workers can receive them. State IT departments typically might have months or years to rewrite software to support such changes. “Suddenly they need to cram a year’s worth of work into the next two hours,” he explains.

Relatively few programmers know Cobol, compared with modern languages. Donald F. Ferguson, a professor of computer science at Columbia University and cofounder and CTO of video-streaming company Seeka TV, says programmers tend to favor languages such as C, Java, Ruby, and Python because they make it easy to reuse packages of code in different applications. Entire ecosystems of open-source code libraries emerged around these languages, saving developers from repetitive tasks. Until relatively recently, it was difficult to create these sorts of reusable modules in Cobol, contributing to its reputation as an outdated language; Ferguson says modern Cobol tools now make that task easier.

Apart from the pandemic, there’s still demand for developers to add new features to older systems or write software that links back-office Cobol systems to the web. Baker says IBM and the Linux Foundation’s Open Mainframe Project partner with more than 4,500 high schools and colleges to create Cobol and mainframe technology programs for students. Open Mainframe Project director John Mertic says graduates of these programs go on to lucrative careers at banks, insurance companies, and other organizations that still use Cobol.

The New Jersey Office of Information Technology website doesn’t list any job openings, for Cobol programmers or anyone else. Rather, it’s seeking volunteers to help it meet its challenges. In other words, it’s asking people who might have high-paying jobs elsewhere to work for free. Ensuring that people can file for unemployment during the pandemic is a worthy cause. But it’s easy to see why the talent to do it might be scarce.


Guillermo Rauch Blog

rauchg.com

Writings

April 21, 2020: Vercel
January 2, 2020: 2019 in Review
March 28, 2017: It’s Hard to Forego Efficiency
January 6, 2017: 2016 in Review
February 4, 2016: Addressable Errors
July 13, 2015: Pure UI
February 22, 2015: ECMAScript 6
November 4, 2014: 7 Principles of Rich Web Applications

Visualizing from Worldometer


Data last updated a minute ago by Worldometers.

This website was developed by Navid Mamoon (@navidmx) and Gabriel Rasskin (@gabrielrasskin), two students at Carnegie Mellon University.

The goal of this project is to provide a simple, interactive way to visualize the impact of COVID-19. We wanted people to be able to see this as something that brings us all together. It’s not one country, or another country; it’s one planet – and this is what our planet looks like today.

The data is from Worldometer’s real-time updates, utilizing reliable sources from around the world. The TODAY cases/deaths are based on GMT (+0). The website pulls new data every 2 minutes, refresh to see any changes.

If you have questions, suggestions, or feedback, please send us an email! We also have a Facebook page, so be sure to like and follow for future updates as we take this project further.

With over 70 million users, servers and maintenance costs can be high. We appreciate any help.

zdnet.com: Palm, a Silicon Valley soap opera

HP just bought struggling smartphone maker Palm, Inc. for a whopping $1.2 billion dollars. What’s it all mean? To answer that, it’s best to understand the strange, winding, wacky story that is Palm.

David Gewirtz
By David Gewirtz for ZDNet Government | April 29, 2010 — 07:18 GMT (08:18 BST) | Topic: Enterprise Software


Before I get started, let me share with you my bona fides. I was Editor-in-Chief of PalmPower Magazine, the largest independent publication devoted to Palm products, from 1998 through 2002. I also headed up PalmPower’s Enterprise Edition, which was a publication funded by Palm to explore the enterprise uses of Palm handhelds.

I spent a lot of my time following Palm, excited by Palm, and then dismayed by Palm and its very strange moves.


A company in search of an identity

For a company with such a clear and compelling concept (i.e., we make computers that’ll fit in the palm of your hand), Palm has always struggled with who it was as a company and who its customers were.

It was originally founded by Jeff Hawkins, who chose the original size of the Pilot because it would fit in a shirt pocket. Remember, this was before even cell phones were that small.

Even the Pilot name didn’t survive the drama. Pilot Pen Corporation, the company that makes pens, decided that the handheld Pilot computer was infringing on their trademark. Palm had built substantial branding for their Pilot 1000 and Pilot 5000 models, and by 1997 was forced to change the name to PalmPilot and by 1998, to Palm.

Jeff’s Palm Computing got bought by U.S. Robotics. It then got bought by 3Com, so you had a company where half of the business was focused on consumer electronics and half was an old-school networking company. Management didn’t always understand what consumers wanted.

In 1998, Palm’s founders had enough and left the company. But in one of the most bizarre acts of corporate spin-offs ever, the founders started Handspring, which effectively cloned the Palm handheld. So now you had Palm and you had Handspring, run by the founders of Palm.

In 2000, 3Com spun Palm Computing out as an independent subsidiary, creating Palm, Inc. The company IPO’d with shares at $95, but within a year, the share value had plummeted to about six bucks.

Handspring

Handspring went on to create the first Palm OS phones, and the Treo brand. But by 2003, Palm itself was moribund and to revitalize the company, they brought Jeff and Donna back, merging Handspring back into Palm. In a fit of complete weirdness, Palm then spun out the Palm OS operating system as a separate company, called PalmSource and renamed the merged Palm and Handspring as palmOne.

Are you keeping up here? So now, we had PalmSource, which owned the Palm OS operating system and palmOne (with a lower-case ‘p’) which was the combined hardware operations of Handspring and Palm. Oh, and in case it wasn’t weird enough, Palm bought Jean-Louis Gassée’s BeOS (which had some strong multimedia components) for $11 million in 2001. Palm never did anything with the acquisition.

It gets weirder. Apparently, when the old Palm spun out PalmSource, they gave PalmSource the right to the Palm trademark. So, for palmOne to use the name Palm (which they had, originally), palmOne had to pay PalmSource $30 million. Seriously.

In 2005, a Japanese company, ACCESS, decided to buy PalmSource, effectively leaving Palm, Inc. without its operating system — the operating system it designed and developed. That’s ok, because Palm wrote ACCESS a check for $44 million for the rights to the Palm OS source code, the build called “garnet”.

Even though Palm had a huge audience for its Palm OS handhelds, the Palm OS itself was getting a little long in the tooth. Palm decided to license Windows Mobile and introduced its first Windows Mobile handset, the Treo 700w, further muddying the waters that was the Palm brand, especially since Palm had long positioned itself as better than Pocket PC and Windows Mobile.

Then there was the Foleo, the “what the frak were they thinking?” over-priced, almost-a-netbook idea that was announced in 2007, but cancelled before it was released.


Pre thoughts and caviar dreams

That all brings us pretty much up to date except for, oh, the screwing of its developers. Palm had a wildly loyal developer base, with thousands of high-quality Palm OS applications and hundreds of companies making a good living innovating on the Palm OS platform.

So what does Palm do? Cut them off. After all, anyone who actually wanted to do business with Palm clearly wasn’t good enough for Palm.

When the Palm Pre came out, almost no Palm OS developers were given the opportunity to develop for the new WebOS. And not only were their best developers not courted, when TealPoint created a skin for old-school Palms that made the launcher look a little like the Pre, Palm sued them, forcing the product off the market.

TealPoint, for the record, was one of the most prolific developers of extremely high-quality Palm OS products. Other Palm developers didn’t get sued, but they didn’t get courted either. Palm, according to most then-Palm developers, didn’t want anything to do with “that old thing”. Palm wanted to start fresh — and that meant a newer, better class of developers.

Palm has always had a strange desire to be Apple and to provide premium, BMW-class products — rather than simple Fords or Chevys. That could be because a few of their CEOs either came from Apple or made Apple add-on products, but no matter what, there’s always been this strange aspirational identity crisis. The company made great products for the “everyman” consumer, but tried to constantly position the products as if they were for the luxury elite.


HP acquisition

All of this brings us back to the HP acquisition of Palm for a truly insane $1.2 billion dollars. In my opinion, HP paid $1,195,000,000 too much.

Let’s look at Palm’s assets. First, there’s the loyal audience of millions who have been using Palm OS devices for years. What? Oh, they’re gone because Palm discontinued the Palm Desktop and said, “see ya, wouldn’t want to be ya.”

OK, nevermind.

Next, there’s the loyal developer community, who’s been building exceptional Palm OS applications for years. What? Oh, they’re gone because Palm did everything they could to get the stink of those old developers off the bottom of their corporate shoes.

OK, nevermind.

There’s the company’s webOS, because there’s no other well-integrated operating system for mobile devices and phones. What? Oh, there’s Android, the iPhone OS, and even the new Windows 7 mobile operating system, plus, of course, Symbian and BlackBerry. WebOS is nice, but is it worth $1.2 billion?

OK, nevermind. Maybe, maybe webOS is worth $5 million. Maybe.

And then there’s Palm’s relationship with carriers. Well, there’s something to be said for that, but HP already has carrier relationships for its iPAQ phones — and while some iPAQs are now getting a little out-of-date, they’re still strong contenders.

What else might HP get? The management team? Seriously? Have you seen how Palm’s management team managed Palm? What about the webOS engineers? Well, if HP had waited a few months, Palm would have imploded, and those engineers could have been picked up for a mere recruiting fee.

Is there any upside at all?

Look, webOS is a fine, little OS. It’d be cool to see an iPad-like device running webOS. And it’d be nice to see more phones, from a more reliable company, sporting webOS as an alternative.

But there’s nothing compelling here, nothing that gives HP an advantage they couldn’t have otherwise gotten from, say, Android. And it’s going to take a whole lot of sales to make up for the $1.2 billion boondoggle that is HP’s purchase of Palm.

It’s sad, really.

The PalmPower archives are still up. If you want a tour through Palm’s strange past as it happened, read HP buys Palm, a Palm retrospective.

Read also:
Did HP save Palm with acquisition? Or did it save itself?
News to know: HP-Palm; Microsoft-HTC; Gizmodo iPhone; earnings
webOS 1.4.1.1 update may have fixed two major Pre Plus problems
Palm-HP: Microsoft bites bigtime
HP forks out $1.2 billion for struggling Palm – Money well spent?
Will HP/Palm be the enterprise challenger to RIM?
HP Slate with webOS: The potential iPad rival from HP’s acquisition of Palm
