Sunday, 03 May

11:34

Could Open-Source Medicine Prepare Us For The Next Pandemic? [Slashdot]

"A new, Linux-like platform could transform the way medicine is developed — and energize the race against COVID-19," reports Fast Company, while arguing that the old drug discovery system "was built to benefit shareholders, not patients." Fast Company's technology editor harrymcc writes: Drug development in the U.S. has traditionally been cloistered and profit-motivated, which means that it has sometimes failed to tackle pressing needs. But an initiative called the Open Source Pharma Foundation hopes to apply some of the lessons of open-source software to the creation of new drugs — including ones that could help fight COVID-19. From the article: The response to COVID-19 has been more open-source than any drug effort in modern memory. On January 11, less than two weeks after the virus was reported to the World Health Organization, Chinese researchers published a draft of the virus's genetic sequence. The information enabled scientists across the globe to begin developing tests, treatments, and vaccines. Pharmaceutical companies searched their archives for drugs that might be repurposed as treatments for COVID-19 and formed consortiums to combine resources and expedite the process. These efforts have yielded some 90 vaccine candidates, seven of which are in Phase I trials and three of which are advancing to Phase II. There are nearly 1,000 clinical trials listed with the Centers for Disease Control and Prevention related to COVID-19. The gathering of resources and grassroots sharing of information aimed at combating the coronavirus has put open-source methods of drug development front and center. "It's our moment," said Bernard Munos, a former corporate strategist at pharma company Eli Lilly... Munos has been arguing for an open-source approach to developing drugs since 2006. "A lot is at stake because if it's successful, the open-source model can be replicated to address other challenges in biomedical research." So now the Open Source Pharma Foundation hopes to offer "a platform where scientists and researchers can freely access technological tools for researching disease, share their discoveries, launch investigations into molecules or potential drugs, and find entities to turn that research into medicine..." according to the article. "If the platform succeeds, it would allow drugs to succeed on their merit and need, rather than their ability to be profitable."

Read more of this story at Slashdot.

10:34

20 Years Later, Creator of World's First Major Computer Virus Located in Manila [Slashdot]

"The man behind the world's first major computer virus outbreak has admitted his guilt, 20 years after his software infected millions of machines worldwide," reports the BBC: Filipino Onel de Guzman, now 44, says he unleashed the Love Bug computer worm to steal passwords so he could access the internet without paying. He claims he never intended it to spread globally. And he says he regrets the damage his code caused. "I didn't expect it would get to the US and Europe. I was surprised," he said in an interview for Crime Dot Com, a forthcoming book on cyber-crime. The Love Bug pandemic began on 4 May, 2000. Victims received an email attachment entitled LOVE-LETTER-FOR-YOU. It contained malicious code that would overwrite files, steal passwords, and automatically send copies of itself to all contacts in the victim's Microsoft Outlook address book. Within 24 hours, it was causing major problems across the globe, reportedly infecting 45 million machines... He claims he initially sent the virus only to Philippine victims, with whom he communicated in chat rooms, because he only wanted to steal internet access passwords that worked in his local area. However, in spring 2000 he tweaked the code, adding an auto-spreading feature that would send copies of the virus to victims' Outlook contacts using a flaw in Microsoft's Windows 95 operating system. "It's not really a virus," wrote CmdrTaco back on May 4, 2000. "It's a trojan that proclaims its love for the recipient and requests that you open its attachment. On a first date even! It then loves you so much that it sends copies of itself to everyone in your address book and starts destroying files on your drive... "Pine/Elm/Mutt users as always laugh maniacally as the trojan shuffles countless wasted packets over saturated backbones filling overworked SMTP servers everywhere. Sysadmins are seen weeping in the alleys."

Read more of this story at Slashdot.

10:02

Saturday Morning Breakfast Cereal - Can [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Dad, you have never appreciated my conditional probabilities!



09:34

After the Pandemic, Will Big Tech Companies Be Unstoppable? [Slashdot]

After the pandemic is over, "The tech giants could have all the power," warns Recode co-founder Kara Swisher, "and absolutely none of the accountability — at least all the power that will truly matter." This is the conclusion that many are coming to as the post-pandemic future begins to come into focus. Wall Street sure is signaling that the power lies with tech companies, vaulting the stock of Amazon to close to $2,400 a share earlier this week, from $1,838 at the end of January. While the price fell on Friday to $2,286 a share, after Amazon's chief executive, Jeff Bezos, said he would spend future profits on the coronavirus response, that still gives the company a value of $1.14 trillion. And all the other Big Tech stocks, which were hit in the first weeks of the pandemic, also are on an upward march to the top of the market-cap heap. Microsoft at $1.32 trillion. Apple at $1.26 trillion. Alphabet at $900 billion. And Facebook at $577 billion. This group now makes up just over 20 percent of the S.&P. 500, which is a flashing yellow signal of what is to come. That is to say, we live in a country in which the very big tech firms will be the very big winners in the economy of the future, which still does not look like it will be so pretty for most people and many companies, too... I neither hate tech nor think most people who work in tech are bad people. But when this crisis is over, I can say that we most certainly should fear Big Tech more because these companies will be freer than ever, with many fewer strictures on them from regulators and politicians. The effort to rein in tech companies had been building decent momentum before the coronavirus outbreak, but it will be harder when the focus needs to be on building up rather than breaking apart. Now, as we turn to the healthy companies to help us revive the economy, it could be that the only ones with real immunity are the tech giants. In this way, Covid-19 has accelerated their rise and tightened their grip on our lives. And this consolidation of power, combined with Big Tech's control of data, automation, robotics, artificial intelligence, media, advertising, retail and even autonomous tech, is daunting.

Read more of this story at Slashdot.

08:34

Beware of Emails Impersonating 'Microsoft Teams' Notifications [Slashdot]

Researchers at the email security company Abnormal Security have discovered "a multi-prong Microsoft Teams impersonation attack" involving "convincingly-crafted emails impersonating the automated notification emails from Microsoft Teams," reports Forbes: The aim, simply to steal employee Microsoft Office 365 login credentials. To date, the researchers report that as many as 50,000 users have been subject to this attack as of May 1. This is far from your average phishing scam, however, and comes at precisely the right time to fool already stressed and somewhat disoriented workers. Instead of the far more commonly used "sort of look-alike" alerts and notifications employed by less careful cybercriminals, this new campaign is very professional in approach. "The landing pages that host both attacks look identical to the real webpages, and the imagery used is copied from actual notifications and emails from this provider," the researchers said. The attackers are also using newly-registered domains that are designed to fool recipients into thinking the notifications are from an official source... As far as the credential-stealing payload is concerned, this is delivered in an equally meticulous way. With multiple URL redirects employed by the attackers, concealing the real hosting URLs, and so aiming to bypass email protection systems, the cybercriminals will eventually drive the user to the cloned Microsoft Office 365 login page.
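
The Forbes piece stops at describing the redirect chains, but the underlying trick is easy to illustrate: the link in the email bounces through several URLs before landing on the credential-harvesting page, so the domain visible in the message tells you little. Below is a minimal Python sketch (using the third-party requests library; the URL is a placeholder, not an indicator from the Abnormal Security report) that follows a link's redirect chain and prints each hop, so the final landing domain can be compared with the one shown in the email.

    # Minimal sketch: follow a suspicious link's redirect chain and print each hop.
    # The URL below is a placeholder, not a real indicator from the report.
    import requests

    def show_redirect_chain(url: str) -> None:
        # allow_redirects=True makes requests follow 3xx responses; the
        # intermediate responses are kept in response.history.
        response = requests.get(url, allow_redirects=True, timeout=10)
        for hop in response.history:
            print(f"{hop.status_code} -> {hop.headers.get('Location')}")
        print(f"Final URL: {response.url}")

    if __name__ == "__main__":
        show_redirect_chain("https://example.com/teams-notification-link")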

Read more of this story at Slashdot.

07:34

Emulating 'Trolls', More Movies Try Bypassing Cinemas For On-Demand Releases [Slashdot]

Trolls World Tour won't be the last major-studio release to bypass movie theatres altogether. An anonymous reader quotes the Guardian: Universal gets a greater cut of revenue from digital services than at the box office, which means the film has made the same amount of profit in its first three weeks as the first Trolls film did during its entire five-month run in U.S. cinemas.... "Universal has cast the first stone," said Jeff Bock, an analyst at research firm Exhibitor Relations. "This is exactly what the theatrical exhibition world had always feared -- proof that bypassing theatres could be a viable model of distribution for studios. "Like it or not, the floodgates have opened. This is just the beginning, and the longer it takes for theatres to open on a worldwide scale, we're going to see the premium-video-on-demand schedule become more and more populated." That schedule is now filling up. Universal announced last week that Judd Apatow's new comedy The King of Staten Island would scrap its planned cinema release on 19 June and premiere on-demand instead. And Warner Bros is doing the same with Scoob!, the first full-length animated Scooby-Doo film, which was meant to hit cinemas on 15 May... The straight-to-digital strategy is only considered to be viable for mid- and lower-budget films forecast to earn at most a few hundred million at the global box office.

Read more of this story at Slashdot.

05:34

Why The Navy's UFO Videos Aren't Showing Aliens [Slashdot]

Syfy Wire's "Bad Astronomy" column is written by astronomer Phil Plait, head science writer of Bill Nye Saves the World. This week he looked at the recently-declassified videos taken by the U.S. Navy's fighter jets showing unidentified flying objects "moving in weird and unexpected ways." ("The 'aura' around the object in some of the footage could simply be the camera overexposing around a bright object; infrared cameras can do that, creating an odd glow.") But to prove they're not aliens, Plait ultimately cites an analysis on the site MetaBunk, "run by former video game programmer and critical thinker Mick West" -- and his videos summarizing discussions on the site's bulletin board: [I]n this one he argues, convincingly to me, that the FLIR video just shows a passenger plane seen from a distance. He also shows that the rotation of the object in the GIMBAL video is almost certainly due to the motion of the camera itself as it tracks the objects. The fighter jet is turning, and at the same time the camera is mounted on a rotating mechanism that allows it to track. These two motions combine to make a somewhat confusing series of rotations in the image, which is why the object in the video appears to rotate around. But my favorite bit is a video where he gives a single, simple explanation that accounts for two things seen in the Navy videos, specifically, why the object in the GOFAST video appears to scream across the water so rapidly, and how in the GIMBAL video the object seems to travel against a strong wind. The answer: It's an illusion due to parallax, how an object close to you seems to move more rapidly against a more distant background as the camera moves... Given the distance, angle, and motion, it's likely that the GOFAST video shows a balloon. As a kind of consolation prize, the column concludes by sharing the cheesy opening credits to a 1970 precursor to the TV show Space: 1999 -- called UFO.
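
The parallax point is just geometry: at the same true speed, a nearby object sweeps across the camera's field of view far faster than a distant one, which is why a balloon relatively close to the jet can look like something screaming across the water. A back-of-the-envelope Python sketch (the speeds and distances are made-up illustrative values, not figures derived from the Navy footage):

    import math

    def apparent_rate_deg_per_s(cross_speed_m_s: float, distance_m: float) -> float:
        # For motion perpendicular to the line of sight, the apparent angular
        # rate is roughly v / d radians per second (small-angle approximation).
        return math.degrees(cross_speed_m_s / distance_m)

    # Same 100 m/s crossing speed, very different distances:
    print(apparent_rate_deg_per_s(100, 1_000))    # ~5.7 deg/s at 1 km
    print(apparent_rate_deg_per_s(100, 20_000))   # ~0.29 deg/s at 20 km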

Read more of this story at Slashdot.

05:00

This man assembled his own covid antibody tests for himself and his friends [MIT Technology Review]

In Portland, Oregon, earlier this spring, a programmer named Ian Hilgart-Martiszus pulled out a needle and inserted it into the arm of social worker Alicia Rowe as she squinted and looked away. He was testing for antibodies to the coronavirus. He’d gathered 40 friends and friends of friends, and six homeless men too.

As a former lab technician, Hilgart-Martiszus knew how to do it. Despite extensive debate over the accuracy of blood tests for coronavirus antibodies and how they should be used, by March anyone with a credit card and some savvy could order “research only” supplies online and begin testing.

“I am just doing it at home. This is total citizen science,” he says. Pretty much everyone who has had the sniffles or a fever in the last few months wants to know if it was really covid-19.

[Photo: Ian Hilgart-Martiszus draws blood from a volunteer to test for covid-19 antibodies. Credit: Michael McNamara]

On April 6, Hilgart-Martiszus posted results of what he dubbed the nation’s “first community serum testing” survey for covid-19, complete with figures and a description of how he did it. He’d beaten out big medical centers by weeks. His data indicated one positive case and three suspected ones.

The DIY effort put him, for a few days, into the forefront of the search for antibodies, blood proteins which form in response to covid-19 and are a telltale indication you’ve been infected. There are now dozens of surveys under way by blood banks and hospitals, and Quest Diagnostics has an online portal where people can try to make an appointment for a blood draw. A physician still has to approve the order.

Just one month ago, though, this type of information was hard to come by. Hilgart-Martiszus, annoyed by criticism of President Trump’s coronavirus response by what he calls the media “echo chamber,” figured that he would try to “fill the void” with actual data. He adds that he is skeptical of big government and “political officials comfortably receiving a salary and advocating to keep the economy closed.”

Hilgart-Martiszus, whose day job is in real-estate planning for a sporting goods chain, first built a computer dashboard in March to predict hospitalizations in Oregon. He emailed a copy to his boss, who told him the company didn’t want to be involved.

By then, though, Hilgart-Martiszus was developing bigger plans. By March, scientific supply companies had begun advertising kits to probe human blood serum for antibodies to the distinctive “spike” protein on the virus. He paid $550 each to get some from the Chinese supplier GenScript.

Most research is carried out by universities or companies under a firm framework of rules. Two weeks after Hilgart-Martiszus posted his results, for instance, his old employer, Providence Health Care services, announced its own much larger serum study, drawing blood from 1,000 people in one day, according to news reports. While Hilgart-Martiszus’s study didn’t have the bells and whistles, or any kind of approval, he couldn’t resist reminding them who was first: “Looks like my old research institute will publish the second antibody study in Oregon. Can’t wait to see how their results compare.”

In Oregon drawing someone else’s blood is legal for anyone who knows how, says Charles “Derris” Hurley, a former pharmacist who says he fronted Hilgart-Martiszus $2,000 to purchase testing supplies. “I said, ‘Let’s go ahead and try this—if we learn something we learn something, and if we don’t we don’t,’” he says. “We are of the attitude that everyone should be tested.”

To take part in the project, Hurley drew blood from his wife, Jan Spitsbergen, a PhD microbiologist who tends zebrafish at Oregon State University, and she drew his. “She was a lot better at it,” he says.

Hilgart-Martiszus used the most accurate kind of antibody test, called an ELISA, which requires some equipment and know-how. He put the blood from his volunteers into special tubes, letting it clot for about 45 minutes. Next he spun it in a centrifuge for 10 minutes and used a pipette to suction off the serum, a clear liquid where the antibodies would be. Then he added dilution buffer and let it incubate with the chemicals he’d bought online on a plastic plate with 96 wells. The liquid would change color if antibodies were present.

To measure the readout from the wells, he needed a machine to scan the plate, which he managed to borrow from a nearby university. This particular test looks for IgG antibodies, a type that would be expected to appear about two weeks after infection.
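
The article doesn't say how he scored the plate, but ELISA results are typically reported as an optical-density value per well and compared against a cutoff derived from negative-control wells. Here is a hypothetical Python sketch of that last step; the CSV layout, column names, and "mean plus three standard deviations" cutoff rule are illustrative assumptions, not his actual protocol.

    import csv
    import statistics

    def score_plate(csv_path: str) -> None:
        # Hypothetical plate-reader export with columns: sample_id, od
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))

        # Assumed cutoff rule: mean of negative controls + 3 standard deviations.
        negatives = [float(r["od"]) for r in rows if r["sample_id"].startswith("NEG")]
        cutoff = statistics.mean(negatives) + 3 * statistics.stdev(negatives)

        for r in rows:
            if r["sample_id"].startswith(("NEG", "POS")):
                continue  # skip control wells
            verdict = "reactive" if float(r["od"]) >= cutoff else "non-reactive"
            print(f"{r['sample_id']}: OD={float(r['od']):.3f} -> {verdict}")

    if __name__ == "__main__":
        score_plate("plate_readings.csv")  # placeholder filename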

In 40 tests, it was Hurley whose blood showed the strongest signal for antibodies to the virus—many times higher than anyone else’s. “If you look at Ian’s printout, I am the one that stands out like a sore thumb,” says Hurley.

It was the potential explanation for a mystery ailment Hurley suffered in mid-December. He’d come down with an unusual cold. He felt fatigued and had red eyes. Then his wife got sick in January and stayed in bed for two weeks. Plus, they’d had a Chinese exchange student living with them at the time. “We started talking more and more—‘We need to have some kind of test, something is wrong,’” he recalls.

Hurley believes he had covid-19, but if he did, that would mean the illness was circulating in the US a month earlier than is widely known (the first official American case was recorded in January near Seattle). As of May 2, the Oregon Health Authority says, there have been 2,579 cases and 104 deaths in the state, making it among those least affected.

Hurley says his positive result is not enough for him to resume his normal routine. “I follow social distancing,” he says. “I guess I want to have more verification and have some idea how long immunity lasts.”

Hilgart-Martiszus asked everyone to tell him if they’d been sick. That included Rowe, the social worker from Portland. “I had a cold in February, and I really hoped that I had gotten it out of the way, but no such luck.” She came up negative.

Demand for antibody tests remains high. After Hilgart-Martiszus posted his results to the web, “he was inundated with requests from all over the world,” says Spitsbergen. A hospital wanting to test its medical staff reached out to him. So did a fire department wanting to test 100 people.

With all the new attention, Hilgart-Martiszus says he’s trying to play by the rules and is not collecting any more blood at the moment. He’s instead working with Oregon State University to create a larger, more formalized study, with approval from an ethics board. He launched a crowdfunding campaign and a website where he’s developing plans to let anyone send in blood for testing.

“I told the first group, don’t take this as a clinical diagnosis—it’s not. It’s research,” he says. “I just pushed it out there.” Now he’s telling people he can’t test them right away, at least until he gets his paperwork in order. “It sucks to wait to help people,” he says, “but with all of the regulations, it’s too risky to test strangers.”

04:56

Linux Device Mapper Adding An "Emulated Block Size" Target [Phoronix]

A new target for Linux's Device Mapper is EBS, the Emulated Block Size...

04:28

Linux 5.8 Will Finally Be Able To Control ThinkPad Laptops With Dual Fans [Phoronix]

Long overdue but for Lenovo ThinkPad laptops sporting two fans, the Linux 5.8 kernel will see the ability to control both fans...

01:34

How Kickstarter's New Union Negotiated Terms For Pandemic-Related Layoffs [Slashdot]

"The COVID crisis has led to a 35% drop in live projects" says Kickstarter communications officer David Gallagher -- who points out that fees on those projects are the company's sole source of income. This led Kickstarter's CEO to announce "sweeping layoffs of up to 45 percent of employees," the union of Kickstarter employees tells Gizmodo. (Though Gallagher says the final numbers will first include some voluntary buyouts, followed by a re-assessment to "better understand the scale of any layoffs that may be required.") But Kickstarter is also the first major tech company to unionize. So what happened next? An anonymous reader shares this report from the two-months-old Kickstarter United (KSRU) union: The bargaining unit was faced with the prospect of involuntary layoffs with two to three weeks of severance per year of employment in the midst of a global pandemic... After two weeks of bargaining, we negotiated a severance package that we are incredibly proud of, which has been unanimously ratified by KSRU. The package prioritizes extended severance payments and health insurance coverage, and we were inspired to see dozens of our highest-paid colleagues volunteer to take layoffs in order to save jobs and increase payouts for lower-paid bargaining unit members. We also negotiated additional terms that are previously unheard-of in tech severance agreements, fulfilling another of our longstanding goals: moving our industry forward and demonstrating the necessity of organizing in tech. The terms we won for our 86-member bargaining unit include: - Four months of severance pay for all laid-off employees, both voluntary and involuntary. - Continuing healthcare coverage increased by salary: four months for our higher-paid colleagues, and six months for those who make less than the bargaining unit's median salary. - Recall rights for a full year, so that if an eliminated position becomes open again in the future, qualified laid-off workers will have priority consideration in filling it. - A release from the non-compete and a modification of the non-solicitation clauses included in our original hiring agreements — an allowance unprecedented in tech that will enable our members to pursue new avenues of employment unfettered... This experience has shown us how crucial it is for tech workers to unite, to leverage our collective strength, and to focus on lifting each other up and protecting one another. Kickstarter United is committed to standing alongside workers everywhere, helping to bring our collective visions for a fairer, more just world to life.

Read more of this story at Slashdot.

Saturday, 02 May

23:34

Pandemic Shows Why .Org Domains Are Important [Slashdot]

The Los Angeles Times published an op-ed by the executive director of Access Now, a global organization that works to protect privacy, free expression, digital security and human rights among internet users. Now that the sale of the .org registry has been blocked, he explains why that matters. As the pandemic has shown, it has been left to civil society organizations, and individual volunteers, to step up and fill the gaps left by governments and corporations. Large organizations such as Doctors Without Borders, the International Red Cross and the United Nations provide direct, immediate support to hospitals and healthcare professionals. Neighborhood and grass-roots organizations have distributed meals and provided accommodation and friendship to the sick and vulnerable. These organizations range in size, mission, effectiveness and reach, but have two elements in common: They're working toward the betterment of society, and their websites end in dot-org... From downloading government health guidelines to online learning to connecting with isolated friends and family, the internet has become a lifeline. It has become the town square, the hospital and the schoolyard all at once. Now was clearly the time to protect it, not sell it off to private equity.... Private companies cannot be trusted to not "increase the rent" on small organizations. Private companies do not spend $1.1 billion on an internet domain unless there is profit to be made... What happens next isn't clear. If the Internet Society no longer wants to control the dot-org domain, an alternative will need to be found... To find this special home, we'll need an open process, innovative ideas and committed partners — all of which we've built over the last few, wild months.

Read more of this story at Slashdot.

21:34

.Org Registry No Longer Being Sold -- But What Should Happen Next? [Slashdot]

One of the advisors to the #SaveDotOrg campaign was Jacob Malthouse, co-founder of the .eco top-level domain (and also a former ICANN vice president). "Here's what needs to happen next," he writes in an essay on Medium: As of today, the #savedotorg campaign has nearly 27,000 supporters and 2,000 nonprofits behind it. It dwarfs any campaign Internet governance has ever seen. There's no way to de-legitimize such an outpouring of concern... ISOC and PIR leadership must recognize and apologize for the harm and uncertainty that they have caused both nonprofits and Internet governance. There never should have needed to be a #savedotorg campaign, because dot-org should never have been put at risk. Second, The ISOC board should invite the leadership of the organizations that led the #SaveDotOrg campaign to an open dialogue to understand their concerns and priorities for the future of dot-org. This dialogue should recognize that it may be agreed that ISOC and PIR may no longer be the appropriate stewards for dot-org... [A]ll parties should agree to work together with ICANN to chart a course of action that builds confidence and faith in the multi-stakeholder model of Internet governance. While there are many challenges with this model, one being how messy it seems, in the end the right decisions were taken. We must all come together to defend the model that has built and will continue to sustain a single global Internet... Now is the time to think about how we can move forward together.

Read more of this story at Slashdot.

19:34

Are Job Interviews Broken? [Slashdot]

"Job interviews are broken," according to a recent New York Times piece by an organizational psychologist at Wharton who argues that his profession has "over a century of evidence on why job interviews fail and how to fix them..." The first mistake is asking the wrong kinds of questions. Some questions are just too easy to fake. What's your greatest weakness? Even Michael Scott, the inept manager in the TV show "The Office," aced that one: "I work too hard. I care too much...." Brainteasers turn out to be useless for predicting job performance, but useful for identifying sadistic managers, who seem to enjoy stumping people. We're better off asking behavioral questions. Tell me about a time when... Past behavior can help us anticipate future behavior. But sometimes they're easy to game, especially for candidates with more experience... The second error is focusing on the wrong criteria. At banks and law firms, managers often favor people who went to the same school or share their love of lacrosse... A third problem: Job interviews favor the candidates who are the best talkers... My favorite antidote to faking is to focus less on what candidates say, and more on what they do. Invite them to showcase their skills by collecting a work sample -- a real piece of work that they produced... Credentials are overrated, and motivation is underrated. It doesn't matter how much experience people have if they lack the drive to think creatively, work collaboratively and keep on learning. The article's subheading argues "Instead of focusing on credentials, let's give candidates the chance to showcase their will and skill to learn." Any Slashdot readers want to share their own experiences? And are job interviews broken?

Read more of this story at Slashdot.

17:34

President Trump Just De-Funded a Research Nonprofit Studying Virus Transmissions [Slashdot]

Charlotte Web writes: The U.S.-based research non-profit EcoHealth Alliance has spent 20 years investigating the origins of infectious diseases like Covid-19 in over 25 countries, "to do scientific research critical to preventing pandemics." America just cut its funding. Trump's reason? "Unfounded rumors" and "conspiracy theories...without evidence," according to reports in Politico and Business Insider. The group had received a total of $3.7 million through 2019 (starting in 2014), publishing over 20 scientific papers since 2015 on how coronaviruses spread through bats, including at least one paper involving a lab in China. But during a White House press briefing, a conservative web site incorrectly stated the whole $3.7 million had gone to that single lab, while even more erroneously implying that that lab was somehow the source of the coronavirus. They'd then asked "Why would the U.S. give a grant like that to China?" and President Trump vowed he would revoke the (U.S.-based) nonprofit research group's grant, which he did 10 days later. Slashdot referenced that research nonprofit just this Sunday, citing a recent interview with the group's president who'd said they'd found nearly 3% of the population in China's rural farming regions near wild animals already had antibodies to coronaviruses similar to SARS. "We're finding 1 to 7 million people exposed to these viruses every year in Southeast Asia; that's the pathway. It's just so obvious to all of us working in the field." Yet Thursday Politico reported the Trump administration "has been pressuring analysts, particularly at the CIA, to search for evidence that the virus came from a lab and that the World Health Organization helped China cover it up," citing a person briefed on those discussions. People briefed on the intelligence also told them there is currently no evidence to support that theory. Michael Morell, the former acting director and deputy director of America's CIA, also pointed out Thursday that the lab in question was in fact partially funded by the United States. "So if it did escape, we're all in this together."

Read more of this story at Slashdot.

16:34

Copyleft and the Cloud: Where Do We Go From Here? [Slashdot]

Free software evangelist Jeremy Allison - Sam (Slashdot reader #8,157) is a co-creator on the Samba project, a re-implementation of the SMB/CIFS networking protocol, and he also works in Google's Open Source Programs Office. Now he shares his presentation at the Software Freedom Conservancy's "International Copyleft Conference." He writes: The Samba project has traditionally been one of the strongest proponents of Copyleft licensing and Free Software. However, in the Corporate Cloud-first world we find ourselves in, traditional enforcement mechanisms have not been effective. How do we achieve the goals of the Free Software movement in this new world, and how do we need to change what we're doing to be successful? Traditional license enforcement doesn't seem to work well in the Cloud and for the modern software environment we find ourselves in. In order to achieve the world of Free Software available for all, I think we need to change our approach. Both GPLv3 and the AGPL have been rejected soundly by most developers. I would argue that we need a new way to inspire developers to adopt Free Software goals and principles, as depending on licensing has failed as licensing itself has fractured. Communication and collaboration are key to this. Stand-alone software is essentially useless. Software interoperability and published protocol and communication definitions are essential to build a freedom-valuing software industry for the future. The talk's title? "Copyleft and the Cloud: Where do we go from here?"

Read more of this story at Slashdot.

15:34

Arm Offers Free Access To Its Chip Designs To Early-Stage Startups [Slashdot]

An anonymous reader quotes Techcrunch: Arm — the U.K. company behind the designs of chips for everyone from Apple to Qualcomm to Samsung — is hoping to kickstart development by offering up access to around 75% of its chip portfolio for free to qualified startups. The move marks an expansion of the company's Flexible Access program. With it, Arm will open access to its IP for early-stage startups. While some of the biggest companies pay the chip designer big bucks for that information, the cost can be prohibitive for those just starting out... Interested parties can access the full list of available IP here.

Read more of this story at Slashdot.

14:34

Several Pharmaceutical Companies Are Racing To Develop a Coronavirus Vaccine [Slashdot]

"The race for a vaccine to combat the new coronavirus is moving faster than researchers and drugmakers expected," reported Dow Jones News Services this week, "with Pfizer Inc. joining several other groups saying that they had accelerated the timetable for testing and that a vaccine could be ready for emergency use in the fall." Pfizer said Tuesday it will begin testing of its experimental vaccine in the U.S. as early as next week. On Monday, Oxford University researchers said their vaccine candidate could be available for emergency use as early as September if it passes muster in studies, while biotech Moderna Inc. said it was preparing to enter its vaccine into the second phase of human testing... If the vaccine shows signs of working safely in the study, Moderna said the third and final phase of testing could start in the fall. The company said it could seek FDA approval to sell the vaccine by year's end, if it succeeds in testing... Merck & Co., a longtime maker of vaccines, said it is talking to potential partners about three different technologies to manufacture coronavirus vaccines... Johnson & Johnson said earlier this month it shaved months off the usual timelines for developing a vaccine, and expects to start human testing of a coronavirus candidate as soon as September, with possible availability on an emergency-use basis in early 2021. SFGate also reports that GlaxoSmithKline and the French pharmaceutical company Sanofi "expect their vaccine will be ready for human testing in the second half of 2020." And the Associated Press notes that America's Food and Drug Administration (FDA) "is tracking at least 86 active different approaches among pharmaceutical companies, academic researchers and scientists around the globe." Dr. Peter Marks, director of the agency's Center for Biologics Evaluation and Research, adds "We expect about two dozen more to enter clinical trials by this summer and early fall."

Read more of this story at Slashdot.

13:34

Aggregate Data From Connected Scales Shows Minimal Weight Gains During Lockdowns [Slashdot]

"Data from connected scale users suggests Americans, on average, are not gaining weight during lockdowns," writes long-time Slashdot reader pfhlick. The Washington Post reports: Withings, the maker of popular Internet-connected scales and other body-measurement devices, studied what happened to the weight of some 450,000 of its American users between March 22 — when New York ordered people home — and April 18. Despite concerns about gaining a "quarantine 15," the average user gained 0.21 pounds during that month... Over the same March-April period in 2019, Withings said its American users gained slightly less weight — 0.19 pounds on average — though fewer people had the scales last year... Dariush Mozaffarian, dean of the School of Nutrition Science and Policy at Tufts University — who wasn't involved with the Withings analysis — said he found the results a bit disappointing. "With the shutdown of the restaurants, I thought the numbers would have gotten better," he said. Home-cooked meals tend to be healthier than dining out. Withings' numbers varied slightly for other countires. But citing a professor of medicine at Stanford, the article notes that average weight gains may be misleading, since some people "may be hitting their groove during stay-at-home orders by embracing cooking and taking up jogging. But others could be using food to cope with stress and gaining large amounts of weight." In fact, 37% of the scale owners gained more than a pound. (Which, if my math is correct, suggests that the other 63% had to lose at least .13 pounds.) The article also notes that for buyers of Withings' scales, "contributing aggregate data is a condition included in its terms of service; its customers don't get the option to opt out if they want to use Withings products."

Read more of this story at Slashdot.

12:34

Will Systemd 245 Bring Major Changes to Linux's Home Directory Management? [Slashdot]

Camel Pilot (Slashdot reader #78,781) writes: Lennart Poettering is proposing homed to alter the way Linux systems handle user management. All user information, such as the username, group membership, and password hashes, will be placed in a cryptographically signed JSON record. The venerable /etc/passwd and /etc/shadow will be a thing of the past. One of the claimed advantages will be home directory portability. "Because the /home directory will no longer depend on the trifecta of systemd, /etc/passwd, and /etc/shadow, users and admins will then be able to easily migrate directories within /home," writes Jack Wallen at TechRepublic. "Imagine being able to move your /home/USER (where USER is your username) directory to a portable flash drive and use it on any system that works with systemd-homed. You could easily transport your /home/USER directory between home and work, or between systems within your company." What is not clear is how portability would work in practice: systems would presumably have to share identical user_id values, group names, group_id values, and so on. And what mechanism is going to provide user authorization to log in to a system? "At the moment, systemd 245 is still in RC2 status," the article notes, adding, "The good news, however, is that systemd 245 should be released sometime this year (2020). When that happens, prepare to change the way you manage users and their home directories."
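
For a feel of what a JSON user record might look like, here is an illustrative Python sketch. The field names and structure below are simplified assumptions for demonstration, not the exact systemd-homed schema, and the signature value is a placeholder rather than a real cryptographic signature.

    import json

    # Illustrative only: a simplified stand-in for a homed-style JSON user record.
    # Field names are assumptions for demonstration, not the real schema, and the
    # signature is a placeholder string, not an actual cryptographic signature.
    record = {
        "userName": "alice",
        "uid": 60001,
        "gid": 60001,
        "memberOf": ["wheel", "audio"],
        "homeDirectory": "/home/alice",
        "privileged": {
            "hashedPassword": ["$6$examplesalt$exampledigest"],
        },
        "signature": [{"data": "base64-placeholder=="}],
    }

    serialized = json.dumps(record, indent=2)
    print(serialized)

    # Reading it back is ordinary JSON parsing:
    parsed = json.loads(serialized)
    print(parsed["userName"], parsed["memberOf"])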

Read more of this story at Slashdot.

12:00

Valve Updates Steam Survey Data For April With A Slight Linux Increase [Phoronix]

Valve has published its Steam Survey results for April, the first full month in which the US and much of the rest of the world have been in lockdown over the coronavirus, making it interesting to see how the pandemic has impacted the gamer metrics...

11:34

Bill Gates Complains America's Coronavirus Testing Data is 'Bogus' [Slashdot]

Appearing on CNN Thursday, Bill Gates called America's coronavirus testing data "bogus," in part because "the United States does not make sure you get results in 24 hours." Business Insider reports: Testing in the U.S. remains a long and complicated task, and it can take several days before people are told whether they have tested positive or negative for COVID-19. "If you get your test results within 24 hours so you can act on it, then let's count it," Gates said, adding that people were most infectious within the first three to four days after infection and might continue to interact with others and spread the virus until they have definitive results. "What's the point of the test?" he said. "That's your period of greatest infectiousness." Gates added that residents of low-income neighborhoods had lesser access to testing facilities and were not prioritized, despite indications that the virus has taken a disproportionate toll on marginalized communities. "Our system fails to have the prioritization that would give us an accurate picture of what's going on," he said. While America is now testing about 200,000 people a day, the article cites experts from Harvard University who believe 20 million tests a day are what's needed to fully "remobilize the economy."

Read more of this story at Slashdot.

10:41

Judge Orders FCC to Hand Over IP Addresses Linked to Fake Net Neutrality Comments [Slashdot]

Before it rolled back net neutrality protections in 2017, America's Federal Communications Commission requested public comments online. But it's still facing criticism over how it handled them, Gizmodo reports: A Manhattan federal judge has ruled the Federal Communications Commission must provide two reporters access to server logs that may provide new insight into the allegations of fraud stemming from the agency's 2017 net neutrality rollback.... The logs will show, among other details, the originating IP addresses behind the millions of public comments sent to the agency ahead of the December 2017 net neutrality vote. The FCC attempted to quash the reporters' request but failed to persuade District Judge Lorna Schofield, who wrote that, despite the privacy concerns raised by the agency, releasing the logs may help clarify whether fraudulent activity interfered with the comment period, as well as whether the agency's decision-making process is "vulnerable to corruption... In this case, the public interest in disclosure is great because the importance of the comment process to agency rulemaking is great," she said, adding: "If genuine public comment is drowned out by a fraudulent facsimile, then the notice-and-comment process has failed."

Read more of this story at Slashdot.

10:20

Enlightenment 0.24 Alpha Released For This X11 Window Manager / Wayland Compositor [Phoronix]

The first alpha release of the Enlightenment 0.24 window manager / Wayland compositor with new features and other improvements...

10:08

Saturday Morning Breakfast Cereal - Lean [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I don't know why this has failed to persist as a military tactic.



10:01

ReactOS Upgrades Its Build Environment - Shifting To A Much Newer GCC Compiler [Phoronix]

The "open-source Windows" ReactOS project has upgraded its build environment leading to much newer versions of key compiler toolchain components...

09:50

Intel's OpenCL Intercept Layer Sees First Release In Two Years [Phoronix]

Intel's OpenCL Intercept Layer remains focused on debugging and analyzing OpenCL application performance across platforms. It hadn't seen a new release in two years, but that changed last month...

09:34

How We Can Save the Comic Book Industry [Slashdot]

destinyland writes: "For the first time in many years, the first Saturday in May won't mark Free Comic Book Day, as the worldwide comic celebration at comic-book stores has been postponed amid coronavirus concerns," reports Oklahoma's largest newspaper — saying it's been postponed to an unspecified new date in the future. But they're suggesting fans can support their local shops anyways, with some still offering limited services, while others "may still be closed but offer gift cards or other online shopping options." I think those of us who have money should observe "Not-Free Comic Book Day" — where we seek out a local comic book retailer, and ask them to mail us a bunch of comic books and graphic novels. (It also means more money going to the postal service.) Or maybe order some comic books to be sent to a younger reader who's sheltering at home. The Associated Press reports that the pandemic "poses a particular threat to comic book shops, a pop-culture institution that has, through pluck and passion, held out through digital upheaval while remaining stubbornly resistant to corporate ownership..." They write that the whole industry "is at a standstill that some believe jeopardizes its future, casting doubt on how many shops will make it through and what might befall the gathering places of proud nerds, geeks and readers everywhere." But it also quotes Joe Field, the owner of Flying Color Comics in Concord, California, who came up with Free Comic Book Day. "Comic book retailers are the cockroaches of pop culture. We have been through all kinds of things that were meant to put us out of business, whether it's the new digital world or distribution upheaval or Disney buying Marvel. We have adapted and pivoted and remade our businesses in ways that are unique and survivable." Individual shops seem to be announcing their own individual celebrations using the #FCBD tag on Twitter. And at least one publisher is using the occasion to stream an alternative event online, reports CBR. "Alt FCD, taking place over the course of May 1 and 2, will feature virtual panels with comic book creators and free digital downloads of books. The event will be streamed on Facebook, Twitch and YouTube."

Read more of this story at Slashdot.

08:34

Tesla's Stock Drops Billions After Elon Musk's Tweetstorm Friday [Slashdot]

Friday Techcrunch reported Elon Musk tweeted to his 33.4 million followers that Tesla's stock price "was 'too high' in his opinion, immediately sending shares into a free fall and in possible violation of an agreement reached with the U.S. Securities and Exchange Commission last year." Tesla's shares plummeted nearly 12% over the next 30 minutes, which reduced Tesla's valuation by over $14 billion, the BBC reports, while reducing Musk's own stake by $3 billion, according to an article shared by long-time Slashdot reader UnresolvedExternal. "In other tweets, he said his girlfriend was mad at him, while another simply read: 'Rage, rage against the dying of the light of consciousness.'" Even at the end of the day, Tesla's shares were still down 7.17%. But Techcrunch called Musk's stock-price tweet "just one of many sent out in rapid fire that covered everything from demands to 'give people back their freedom' and lines from the U.S. National Anthem to quotes from poet Dylan Thomas and a claim that he will sell all of his possessions." Rolling Stone has more on what they're calling Musk's "quarantine tantrum," noting that in a Wednesday earnings call, Musk had also complained about restrictions on non-essential businesses and ordinary people. "To say that they cannot leave their house, and they will be arrested if they do, this is fascist... give people back their goddamn freedom." The magazine notes this drew a scathing rebuttal on nationwide TV from The Daily Show's Trevor Noah: "Finally, someone has decided to call out this fascist American government that's asking people to please stay in their houses to try and save their own lives," Noah said sarcastically. "I mean, you're not even allowed to go to the grocery store anymore! Well, actually, you can go to the grocery store, but you can't even go for a walk! I mean, you can do that too, but what about the beach? You're not allowed to go to the beach, except for all the states where you're allowed to go to the beach. But you definitely can't go to H&M, and that is the definition of fascism." CNN Business writes that Musk, "heralded for years as a pioneer in space travel and transportation, has recently veered into disseminating coronavirus misinformation," adding that Musk's comments "also come in stark contrast to those made by some of his peers in Silicon Valley, who have urged caution on reopening." "I worry that reopening certain places too quickly, before infection rates have been reduced to very minimal levels, will almost guarantee future outbreaks and worsen longer-term health and economic outcomes," Facebook CEO Mark Zuckerberg said during an earnings call Wednesday.

Read more of this story at Slashdot.

07:42

KDE Starts May With Dolphin Improvements, Various Bug Fixes [Phoronix]

KDE developers remain as busy as ever during the global lockdown around the coronavirus...

07:00

Google Wants Australia To Remove Civil Penalties From CLOUD Act-Readying Bill [Slashdot]

An anonymous reader quotes a report from ZDNet: Google has raised a handful of concerns with Australia's pending Telecommunications Legislation Amendment (International Production Orders) Bill 2020 (IPO Bill), including the Commonwealth's choice of phrasing, the avenues proposed for record-sharing, and the Bill being at odds with the purpose of the United States' Clarifying Lawful Overseas Use of Data Act (CLOUD Act). [...] In a submission [PDF] to the Parliamentary Joint Committee on Intelligence and Security (PJCIS) and its review of the IPO Bill, Google said that while it encourages and supports efforts by the Australian government to negotiate an executive agreement, certain elements of the Bill give it cause for concern. "Especially when considering how the interception powers under this Bill could be used in tandem with technical capability notices under the controversial Telecommunications and Other Legislation Amendment (Assistance and Access) Act," it wrote. Making a recommendation to the PJCIS, Google said the Bill should not apply to service providers in their capacity as infrastructure providers to corporations or government entities, saying corporations or government entities are best placed to produce the requested records themselves. Under the Bill, designated communications providers are instructed to provide any requested communications and data to the requesting agency or the Australian Designated Authority. Google would prefer the authority to be a two-way channel. Google also poked holes in the Bill's enforcement threshold. Civil penalties for non-compliance with an IPO establish a framework for compliance. If a designated communications provider receives a valid IPO and the designated communications provider meets the "enforcement threshold" when the IPO is issued, the designated communications provider must comply with the IPO. Google labelled the two-step test that forms the threshold a "relatively low bar to meet." "Failure to comply with an IPO may lead to a civil penalty of up to AU$10 million for body corporates. The imposition of a mandatory obligation to comply with an IPO is contrary to the purpose of the CLOUD Act which is to lift blocking statutes, but explicitly does not create a compulsory obligation on service providers," it said. Specifically, the search giant said it was concerned by the attempt to impose a mandatory obligation on overseas-based designated communications providers that exists "only in the construct of an otherwise non-compulsory international agreement." Google is seeking further information about the role that eligible judges will play in approving IPOs that involve the interception of communications. It also wants the appeal options contained within the Bill to be strengthened.

Read more of this story at Slashdot.

05:09

Linux 5.8 Seeing Support For New Marvell/Aquantia Atlantic "A2" NICs [Phoronix]

Linux 5.8 will see support for next-generation Marvell/Aquantia network chipsets...

04:23

Open-Source OpenXR Runtime Monado Seeing Better Performance, New Functionality [Phoronix]

Monado, the open-source OpenXR run-time implementation for Linux, has been advancing quite well since we last reported on it back in February with its inaugural v0.1 release...

04:00

NASA Names Companies To Develop Human Landers For Artemis Moon Missions [Slashdot]

New submitter penandpaper shares an excerpt from a NASA press release: NASA has selected three U.S. companies to design and develop human landing systems (HLS) for the agency's Artemis program, one of which will land the first woman and next man on the surface of the Moon by 2024. NASA is on track for sustainable human exploration of the Moon for the first time in history. The human landing system awards under the Next Space Technologies for Exploration Partnerships (NextSTEP-2) Appendix H Broad Agency Announcement (BAA) are firm-fixed price, milestone-based contracts. The total combined value for all awarded contracts is $967 million for the 10-month base period. The following companies were selected to design and build human landing systems:

- Blue Origin of Kent, Washington, is developing the Integrated Lander Vehicle (ILV) -- a three-stage lander to be launched on its own New Glenn Rocket System and ULA Vulcan launch system.
- Dynetics (a Leidos company) of Huntsville, Alabama, is developing the Dynetics Human Landing System (DHLS) -- a single structure providing the ascent and descent capabilities that will launch on the ULA Vulcan launch system.
- SpaceX of Hawthorne, California, is developing the Starship -- a fully integrated lander that will use the SpaceX Super Heavy rocket.

"With these contract awards, America is moving forward with the final step needed to land astronauts on the Moon by 2024, including the incredible moment when we will see the first woman set foot on the lunar surface," said NASA Administrator Jim Bridenstine. "This is the first time since the Apollo era that NASA has direct funding for a human landing system, and now we have companies on contract to do the work for the Artemis program." Further reading: SpaceX and NASA Break Down What Their Historic First Astronaut Mission Will Look Like

Read more of this story at Slashdot.

03:03

As Brit cyber-spies drop 'whitelist' and 'blacklist', tech boss says: If you’re thinking about getting in touch saying this is political correctness gone mad, don’t bother [The Register]

Whitehat and blackhat next?

The British government's computer security gurus have announced they will stop using the terms whitelisting and blacklisting in their online documentation.…

01:00

Robert May, Former UK Chief Scientist and Chaos Theory Pioneer, Dies Aged 84 [Slashdot]

Pioneering Australian scientist Robert May, whose work in biology led to the development of chaos theory, has died at age 84. The Guardian reports: Known as one of Australia's most accomplished scientists, he served as the chief scientific adviser to the United Kingdom, was president of the Royal Society, and was made a lord in 2001. Born in Sydney on January 8, 1938, May's work was influential in biology, zoology, epidemiology, physics and public policy. More recently, he applied scientific principles to economics and modeled the cause of the 2008 global financial crisis. On Wednesday, his friends and colleagues paid tribute to a man who they said was a gifted polymath and a "true giant" among scientists. Dr Benjamin Pope, an Australian astrophysicist and student at Oxford from 2013 to 2017, said May was a role model, and meeting him was a highlight of his university career. "I became aware of his achievements almost as soon as I learnt anything about physics in university," Pope told Guardian Australia. "My first contact with computer programming was at the University of Sydney, in first year physics, where the example is to recreate Robert May's experiment with the bifurcation diagram and the logistic map. "His bifurcation diagram is one of the iconic diagrams in physics," he said. "[And] he made what was between three or four independent discoveries that lead to chaos theory. You might have heard of the butterfly effect ... May's is probably the other foundational, computational model of chaos."
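
The logistic map Pope mentions is short enough to reproduce in a few lines of Python: iterate x_{n+1} = r * x * (1 - x) for a range of growth rates r, throw away the transient, and the values the system settles into trace out May's bifurcation diagram (the iteration counts below are arbitrary choices).

    # Recreate the data behind Robert May's bifurcation diagram for the logistic
    # map x_{n+1} = r * x * (1 - x): for each growth rate r, iterate past a
    # transient and record the values the system settles into.
    def logistic_attractor(r, x0=0.5, transient=500, keep=100):
        x = x0
        for _ in range(transient):   # discard the transient
            x = r * x * (1 - x)
        values = []
        for _ in range(keep):        # record long-run behaviour
            x = r * x * (1 - x)
            values.append(x)
        return values

    for r in (2.8, 3.2, 3.5, 3.9):
        distinct = sorted({round(v, 4) for v in logistic_attractor(r)})
        # r=2.8: one fixed point; r=3.2: period 2; r=3.5: period 4; r=3.9: chaos
        print(f"r={r}: {len(distinct)} distinct long-run values")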

Read more of this story at Slashdot.

Friday, 01 May

22:00

The Godot Game Engine's Vulkan Support Is Getting In Increasingly Great Shape [Phoronix]

The open-source Godot Game Engine lead developer Juan Linietsky has published a new Vulkan progress report, the first in three months, and as such there are a lot of changes...

21:30

'Hydrogen-On-Tap' Device Turns Trucks Into Fuel-Efficient Vehicles [Slashdot]

An anonymous reader quotes a report from IEEE Spectrum: The city of Carmel, Ind., has trucks for plowing snow, salting streets, and carrying landscaping equipment. But one cherry-red pickup can do something no other vehicle can: produce its own hydrogen. A 45-kilogram metal box sits in the bed of the work truck. When a driver starts the engine, the device automatically begins concocting the colorless, odorless gas, which feeds into the engine's intake manifold. This prevents the truck from guzzling gasoline until the hydrogen supply runs out. The pickup has no fuel cell module, a standard component in most hydrogen vehicles. No high-pressure storage tanks or refueling pumps are needed, either. Instead, the "hydrogen-on-tap" device contains six stainless steel canisters. Each contains a 113-gram button of an aluminum and gallium alloy. A small amount of water drips onto the buttons, causing a chemical reaction that splits the oxygen and hydrogen contained in the water. The hydrogen releases, and the rest turns into aluminum oxide, a waste product that can be recycled to create more buttons. Back in the garage, the driver can replace spent canisters with new ones to replenish the hydrogen supply. AlGalCo -- short for Aluminum Gallium Company -- has spent 14 years refining the technology, which is based on a process developed by distinguished engineer Jerry Woodall. In 2013, AlGalCo partnered with the Carmel Street Department to build a prototype for one of the city's Ford F-250 trucks. In tests, the red pickup has seen a 15 percent improvement in gas mileage and a 20 percent drop in carbon dioxide emissions.
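
The chemistry is essentially the classic aluminum-water reaction, 2 Al + 3 H2O -> Al2O3 + 3 H2, with the gallium serving to keep the aluminum surface from passivating. As a rough upper bound on what one canister could yield (assuming, purely for illustration, that the whole 113-gram button were aluminum; the real buttons are an Al-Ga alloy of unstated composition, so the actual figure is lower):

    # Rough upper bound on hydrogen from one 113 g button, treated as pure
    # aluminum for illustration (the real Al-Ga alloy composition is unstated).
    # Reaction: 2 Al + 3 H2O -> Al2O3 + 3 H2
    AL_MOLAR_MASS = 26.98    # g/mol
    H2_MOLAR_MASS = 2.016    # g/mol
    MOLAR_VOLUME_STP = 22.4  # L/mol for an ideal gas at 0 C and 1 atm

    button_mass_g = 113
    mol_al = button_mass_g / AL_MOLAR_MASS
    mol_h2 = mol_al * 3 / 2  # 3 mol H2 per 2 mol Al

    print(f"about {mol_h2 * H2_MOLAR_MASS:.1f} g of H2, "
          f"roughly {mol_h2 * MOLAR_VOLUME_STP:.0f} L at STP")
    # -> about 12.7 g of H2, roughly 141 L at STP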

Read more of this story at Slashdot.

19:50

Dogs Are Now Being Trained To Sniff Out Coronavirus [Slashdot]

New Slashdot submitter Joe2020 shares a report from the BBC: Firefighters in Corsica, France, are aiming to teach canines how to sniff out coronavirus, as they can other conditions. It's hoped that detection dogs could be used to identify people with the virus at public places like airports. Their trial is one of several experiments being undertaken in countries including the UK and the USA. "Each individual dog can screen up to 250 people per hour," James Logan, head of the London School of Hygiene & Tropical Medicine, told The Washington Post. "We are simultaneously working on a model to scale it up so it can be deployed in other countries at ports of entry, including airports." The dogs are trained using urine and saliva samples collected from patients who tested positive and negative for the disease. "We don't know that this will be the odor of the virus, per se, or the response to the virus, or a combination," Cynthia Otto, director of the Working Dog Center at Penn's School of Veterinary Medicine, told the publication. "The dogs don't care what the odor is ... What they learn is that there's something different about this sample than there is about that sample."

Read more of this story at Slashdot.

19:10

Steam Ends Mac Support For SteamVR [Slashdot]

Steam will no longer support SteamVR on macOS. The Verge reports: Steam introduced SteamVR for Apple computers way back in the mists of time -- 2017's Worldwide Developers Conference. As The Verge wrote then: "Valve has been working with Apple on this since last summer, which shows a high level of technical and business confidence in Apple's VR efforts." The move was announced in a short post on SteamVR's news page, laid out in a single sentence: "SteamVR has ended macOS support so our team can focus on Windows and Linux." Mac users will still have some access to the feature, however, via legacy builds. One door closes, another will surely open. Right?

Read more of this story at Slashdot.

19:03

Oracle faces claims of unequal pay from 4,000+ women after judge upgrades gender gap lawsuit to class action [The Register]

IT giant accused of paying women less than men doing exact same roles

A lawsuit filed against Oracle on behalf of six women seeking to be paid as much as their male colleagues has been certified as a class action – a legal milestone that will allow thousands of women a chance to have their gender discrimination claims heard.…

18:30

EPA Denies Elon Musk's Claims Over Tesla Model S Range Test [Slashdot]

The Environmental Protection Agency has rebuffed comments Tesla CEO Elon Musk made concerning what he calls an error during the Model S Long Range's testing process, which the executive says cost the car a 400-mile range estimate. The agency tells Roadshow it conducted the testing properly. CNET reports: During Tesla's Q1 investor call this week -- which also included some colorful language surrounding stay-at-home orders during the coronavirus pandemic -- Musk said the Model S Long Range should boast a 400-mile range estimate, but instead, the EPA gave it a 391-mile estimate. Why? According to the CEO, at some point during the testing process, someone left the keys inside the car and the door open overnight. The Model S entered a "waiting for driver" mode, which depleted 2% of the EV's range, hence the sub-400-mile rating. Musk added that the company plans to retest the Model S with the EPA and is "confident" the test will produce a 400-mile car. The automaker did not return Roadshow's request for comment on the situation, but an EPA spokesperson said in a statement, "We can confirm that EPA tested the vehicle properly, the door was closed, and we are happy to discuss any technical issues with Tesla, as we do routinely with all automakers." It could very well be that Tesla estimates show the Model S Long Range returns a 400-mile range, but for now, the 391-mile estimate sticks with the EPA. To Tesla's credit, that's still the highest range rating of any electric car currently on the market, and just nine miles off the coveted 400-mile mark.

Read more of this story at Slashdot.

17:57

Bye, Russia: NASA wheels out astronauts, describes plan for first all-American manned launch into orbit since 2011 [The Register]

Demo-2 mission to send SpaceX capsule, rocket from Florida to the International Space Station this month

NASA today introduced to the world the American astronauts set to ride an American rocket into low-Earth orbit from American soil, a journey that will be the first of its kind since the final Space Shuttle launch in 2011.…

17:50

Frontier, Amid Bankruptcy, Is Suspected of Lying About Broadband Expansion [Slashdot]

An anonymous reader quotes a report from Ars Technica: Small Internet providers have asked for a government investigation into Frontier Communications' claim that it recently deployed broadband to nearly 17,000 census blocks, saying the expansion seems unlikely given Frontier's bankruptcy and its historical failure to upgrade networks in rural areas. The accuracy of Frontier's claimed expansion matters to other telcos because the Federal Communications Commission is planning to distribute up to $16 billion to ISPs that commit to deploying broadband in census blocks where there isn't already home Internet service with speeds of at least 25Mbps downstream and 3Mbps upstream. An entire census block can be ruled ineligible for the $16 billion distribution under the FCC's Rural Digital Opportunity Fund (RDOF) even if only one or a few homes in the block have access to 25/3Mbps broadband. Frontier's recent FCC filing lists about 17,000 census blocks in which it has deployed 25/3Mbps broadband since June 2019 and tells the FCC that these census blocks should thus be "removed" from the list of blocks where ISPs can get funding. Frontier reported more new broadband deployments than any other provider that submitted filings in the FCC proceeding. The 17,000 blocks are home to an estimated 400,000 Americans. NTCA -- The Rural Broadband Association, which represents about 850 small ISPs, is skeptical of Frontier's reported deployment. "It may be possible that Frontier did precisely what was necessary to meet the standards for reporting significant increased deployment during this eight-month period in the face of years of historical inaction in these areas, admitted shortcomings on interim universal service buildout obligations, and increasing financial struggles," NTCA told the FCC in a filing on Wednesday. "However, such a remarkable achievement warrants validation and verification given the implications. NTCA therefore urges the commission to immediately investigate the claims of coverage made in the Frontier [filing]." The Rural Broadband Assocation went on to say that its members "serve rural areas in the same states as Frontier and, indeed, they frequently field pleas from consumers living in the latter's service area in need of access to robust broadband service. This experience -- and their decades of experience in serving sparsely populated rural areas of the nation more generally -- have caused NTCA members to question whether the filing accurately reflects conditions on the ground changing so quickly in so many places in such a short time."

Read more of this story at Slashdot.

17:30

QEMU Version 5.0.0 Released [Slashdot]

The developers of QEMU (Quick EMUlator), the open-source machine emulator that can run programs built for architectures such as ARM and RISC-V, have released version 5.0. Slashdot reader syn3rg writes: Hot on the heels of the 4.0 release (from a major release perspective), the QEMU team has released version 5.0. This version has many changes, including:

  • Live migration support for external processes running on QEMU D-Bus
  • Support for using memory backends for main/"built-in" guest RAM
  • block: support for compressed backup images via block jobs
  • ARM: support for the following architecture features: ARMv8.1 VHE/VMID16/PAN/PMU, ARMv8.2 UAO/DCPoP/ATS1E1/TTCNP, ARMv8.3 RCPC/CCIDX, ARMv8.4 PMU/RCPC
  • ARM: support for the Cortex-M7 CPU
  • ARM: new board support for tacoma-bmc, Netduino Plus 2, and Orangepi PC
  • MIPS: support for the GINVT (global TLB invalidation) instruction
  • PowerPC: 'powernv' machine can now emulate KVM hardware acceleration to run KVM guests while in TCG mode
  • PowerPC: support for file-backed NVDIMMs for persistent memory emulation
  • RISC-V: experimental support for v0.5 of the draft hypervisor extension
  • s390: support for Adapter Interrupt Suppression while running in KVM mode

"Not a current user, but I'm happy to see the project advancing," adds syn3rg. For the full list of changes, you can visit the changelog. QEMU 5.0 can be downloaded here.
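
For instance, to poke at one of the newly supported ARM boards once QEMU 5.0 is installed, something along these lines should work (a minimal sketch, not from the release notes; firmware.bin stands in for whatever bare-metal image you want to boot):

$ qemu-system-arm -M help | grep -i netduino   # confirm the Netduino Plus 2 machine is in this build
$ qemu-system-arm -M netduinoplus2 -kernel firmware.bin -nographic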

Read more of this story at Slashdot.

17:10

New Bill Threatens Journalists' Ability To Protect Sources [Slashdot]

A draft bill, first proposed by Sen. Lindsey Graham (R-SC) in January, intends to combat online child exploitation but could introduce significant harm to journalists' ability to protect their sources. TechCrunch reports: Under the Eliminating Abusive and Rampant Neglect of Interactive Technologies (or EARN IT) Act, a government commission would define best practices for how technology companies should combat this type of material. On the surface, EARN IT proposes an impactful approach. A New York Times investigation in September found that "many tech companies failed to adequately police sexual abuse imagery on their platforms." The investigation highlighted features, offered by these companies, that provide "digital hiding places for perpetrators." In reality, the criticized features are exactly the same ones that protect our privacy online. They help us read The Washington Post in private and ensure we only see authentic content created by the journalists. They allow us to communicate with each other. They empower us to express ourselves. And they enable us to connect with journalists so the truth can make the page. This raises the question of whether the bill will primarily protect children or primarily undermine free speech online. It should be pointed out that EARN IT does not try to ban the use of these features. In fact, the bill does not specifically mention them at all. But if we look at how companies would apply the "best practices," it becomes clear that the government is intending to make these features difficult to provide, that the government is looking to discourage companies from offering -- and increasing the use of -- these features. By accepting EARN IT, we will give up our ability -- and our children's future abilities -- to enjoy online, social, connected and private lives. Four of the "best practices" relate to requiring companies to have the ability to "identify" child sexual abuse material. Unfortunately, it's not possible to identify this material without also having the ability to identify any and all other types of material -- like a journalist communicating with a source, an activist sharing a controversial opinion or a doctor trying to raise the alarm about the coronavirus. Nothing prevents the government from later expanding the bill to cover other illegal acts, such as violence or drugs. And what happens when foreign governments want to have a say in what is "legal" and what is not?

Read more of this story at Slashdot.

14:55

Spyware slinger NSO to Facebook: Pretty funny you're suing us in California when we have no US presence and use no American IT services... [The Register]

Malware maker urges judge to dump lawsuit over WhatsApp phone snooping

Israeli spyware maker NSO Group has rubbished Facebook's claim it can be sued in California because it allegedly uses American IT services and has a business presence in the US.…

13:12

Intel Sends Out Rocket Lake Linux Graphics Driver Patches - Confirms Gen12 Platform [Phoronix]

A day after announcing the 10th Gen Core "Comet Lake" S-Series CPUs, the Intel open-source engineers have volleyed their first patches for bringing up the graphics on next-gen Rocket Lake...

12:16

Amazon settles for $11m with workers in unpaid bag-search wait lawsuit [The Register]

Puts to rest claims staff should've been paid for time spent in security lines

Amazon yesterday settled for $11m with staff at its California warehouses who'd sued it over uncompensated wait times for security checks as they began and ended shifts.…

12:04

Proton 5.0-7 Released With New Game Support, Updated VKD3D/DXVK [Phoronix]

Following the Proton 5.0-7 release candidate from a few days ago, this critical part of Valve's Steam Play is now available for weekend gamers...

11:01

Smartphone shipments plummet in Q1 as users, er, lock down their spending [The Register]

Coronavirus + entity lists + people not keen to upgrade = 13% dive

Early forecasts of the Q1 smartphone sector made for grim reading, with appetite expected to be severely suppressed thanks to the COVID-19 pandemic. Subsequent analysis from Canalys shows those forecasts were bang-on, with worldwide shipments into the channel falling by 13 per cent year-on-year, to just over 272 million units.…

10:10

Browse mode: We're not goofing off on the Sidebar of Shame and online shopping sites, says UK's Ministry of Defence [The Register]

Its servers merely record more HTTPS requests to Mail Online and Amazon than anywhere else

Civil servants at the UK's Ministry of Defence are spending a large part of their surfing time gazing at online shopping and news websites, the red-faced government department has admitted.…

09:47

Saturday Morning Breakfast Cereal - Authentic [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Slowly, Taco Bell became the punchline to every episode of SMBC.


Today's News:

09:01

Xiaomi what you're working with: Chinese mobe-flinger proffers two Redmi Note phablets for UK market [The Register]

IR blasters and headphone jacks, likely south of £350? Oh my

Chinese phone-maker Xiaomi's onslaught into the UK market continues with two more phones: the MediaTek Helio G85-powered Redmi Note 9, and the more upmarket Redmi Note 9 Pro, which uses the Qualcomm 720G platform.…

08:46

Thanks Oracle! New Patches Pending Can Reduce Linux Boot Times Up To ~49% [Phoronix]

While many don't look upon Oracle's open-source software contributions too eagerly, some new patches out by their team can dramatically benefit Linux kernel boot times and they are working on getting it upstream. The numbers are already very promising and further work is also underway to make the improvement even more tantalizing...

08:00

Brit magistrates' courts turn to video conferencing to keep wheels of justice turning [The Register]

It's not just Skype and Zoom cashing in on remote-working boom

Britain's courts are moving to their own video-conferencing platform – for criminal trials rather than business meetings.…

07:28

NVIDIA Gets Into Open-Source Hardware With A Ventilator Design [Phoronix]

While waiting to see what NVIDIA will be doing on the open-source driver front that has been pushed back, NVIDIA made a surprise open-source announcement today...

07:15

Bezos to the Moon: Blue Origin joins SpaceX and Dynetics in a three-horse lunar lander race [The Register]

NASA selects three contenders for flag-in-Moon prize

With a scant few years remaining until the agency's 2024 boots-on-the-Moon goal, NASA has named the three US companies that will be dealing with the tricky human landing bit of the mission.…

07:04

AMDGPU TMZ Support Wired Up For Linux 5.8 [Phoronix]

In addition to Intel sending in new feature code to DRM-Next, AMD developers on Thursday also sent in their AMDGPU/AMDKFD feature updates for Linux 5.8...

06:30

$31bn spent on cloudy infrastructure in Q1 on back of employees' mass migration to home working [The Register]

Digital gold rush spurred by global pandemic: Big 4 bag 62% of market

Cloud infrastructure providers are making bank following the mass migration of millions of workers from their offices to their homes, with spending on services leaping by 34.5 per cent in Q1 to $31bn.…

06:22

These pop songs were written by OpenAI’s deep-learning algorithm [MIT Technology Review]

The news: In a fresh spin on manufactured pop, OpenAI has released a neural network called Jukebox that can generate catchy songs in a variety of different styles, from teenybop and country to hip-hop and heavy metal. It even sings—sort of. 

How it works: Give it a genre, an artist, and lyrics, and Jukebox will produce a passable pastiche in the style of well-known performers, such as Katy Perry, Elvis Presley or Nas. You can also give it the first few seconds of a song and it will autocomplete the rest. 

Old songs, new tricks: Computer-generated music has been a thing for 50 years or more, and AIs already have impressive examples of orchestral classical and ambient electronic compositions in their back catalogue. Video games often use computer-generated music in the background, which loops and crescendos on the fly depending on what the player is doing at the time. But it is much easier for a machine to generate something that sounds a bit like Bach than the Beatles. That’s because the mathematical underpinning of much classical music lends itself to the symbolic representation of music that AI composers often use. Despite being simpler, pop songs are different. 

OpenAI trained Jukebox on 1.2 million songs, using the raw audio data itself rather than an abstract representation of pitch, instrument, or timing. But this required a neural network that could track so-called dependencies—a repeating melody, say—across the three or four minutes of a typical pop song, which is hard for an AI to do. To give a sense of the task, Jukebox keeps track of millions of time stamps per song, compared with the thousand time stamps that OpenAI’s language generator GPT-2 uses when keeping track of a piece of writing. 
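
As a rough back-of-the-envelope check of that scale (assuming CD-quality 44.1 kHz audio and a four-minute song; these figures are illustrative, not taken from the article):

\[ 44{,}100 \ \mathrm{samples/s} \times 240 \ \mathrm{s} \approx 1.06 \times 10^{7} \ \mathrm{time\ steps\ per\ song} \]

against the roughly one thousand time stamps cited above for GPT-2, a gap of about four orders of magnitude.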

Chatbot sing-alongs: To be honest, it’s not quite there yet. You will notice that the results, while technically impressive, are pretty deep in the uncanny valley. But while we are still a long way from artificial general intelligence (OpenAI’s stated goal), Jukebox shows once again just how good neural networks are getting at imitating humans, blurring the line between what’s real and what’s not. This week, rapper Jay-Z started legal action to remove deepfakes of him singing Billy Joel songs, for example. OpenAI says it plans to conduct research into the implications of AI for intellectual-property rights.

06:22

Intel's Cloud Hypervisor 0.7 Adds More Hotplug Capabilities, Musl Libc, SECCOMP Sandbox [Phoronix]

Intel's server software team continues working on Cloud-Hypervisor as a Rust-written hypervisor for modern Linux VMs. Cloud-Hypervisor has been picking up a lot of features and out today is another pre-1.0 feature release...

05:59

Microsoft! Please, put down the rebrandogun. No one else needs to get hurt... But it's too late for Visual Studio Online [The Register]

Now 'Visual Studio Codespaces': Prices sliced, but won't somebody think of the branded swag?

Barely six months on from its grand unveiling, Microsoft is renaming its browser-based code botherer, Visual Studio Online and, more importantly, is trimming its prices.…

05:15

Ride now, ride! Ride for ruin and the world's ending! Mount & Blade II: Bannerlord is here at last! Kind of [The Register]

Crush your enemies, see them driven before you, and hear the lamentations of your partner

The RPG  Greetings, traveller, and welcome back to The Register Plays Games, our monthly gaming column. This one has been intensely anticipated by myself and thousands of others for eight long years. We had abandoned all hope, but now it's here and it's not even finished yet. So without further ado...

05:00

When people pause the Internet goes quiet [The Cloudflare Blog]


Recent news about the Internet has mostly been about the great increase in usage as those workers who can have been told to work from home. I've written about this twice recently, first in early March and then again last week, looking at how Internet use has risen to a new normal.


As human behaviour has changed in response to the pandemic, it's left a mark on the charts that network operators look at day in, day out to ensure that their networks are running correctly.

Most Internet traffic has a fairly simple rhythm to it. Here, for example, is daily traffic seen on the Amsterdam Internet Exchange. It's a pattern that's familiar to most network operators. People sleep at night, and there's a peak of usage in the early evening when people get home and perhaps stream a movie, or listen to music or use the web for things they couldn't do during the workday.


But sometimes that rhythm gets broken. Recently we've seen the evening peak joined by morning peaks as well. Here's a graph from the Milan Internet Exchange. There are three peaks: morning, afternoon and evening. These peaks seem to be caused by people working from home and children being schooled and playing at home.


But there are other ways human behaviour shows up on graphs like these. When humans pause, the Internet goes quiet. Here are two examples that I've seen recently.

The UK and #ClapForNHS

Here's a chart of Internet traffic last week in the UK. The triple peak is clearly visible (see circle A). But circle B shows a significant drop in traffic on Thursday, April 23.


That's when people in the UK clapped for NHS workers to show their appreciation for those on the front line dealing with people sick with COVID-19.

Ramadan

Ramadan started last Friday, April 24 and it shows up in Internet traffic in countries with large Muslim populations. Here, for example, is a graph of traffic in Tunisia over the weekend. A similar pattern is seen across the Muslim world.


Two important parts of the day during Ramadan show up on the chart. These are the iftar and sahoor. Circle A shows the iftar, the evening meal at which Muslims break the fast. Circle B shows the sahoor, the early morning meal before the day's fasting.

Looking at the previous weekend (in green) you can see that the Ramadan-related changes are not present and that Internet use is generally higher (by 10% to 15%).


Conclusion

We built the Internet for ourselves and despite all the machine-to-machine traffic that takes place (think IoT devices chatting to their APIs, or computers updating software in the night), human-directed traffic dominates.

I'd love to hear from readers about other ways human activity might show up in these Internet trends.

04:46

Intel Graphics Code Seeing More Tiger Lake Action, Power Efficiency Work For Linux 5.8 [Phoronix]

Intel's graphics driver team continues amassing more changes for Linux 5.8...

04:35

GhostBSD 20.04 Released With Fixes, Updated Kernel [Phoronix]

GhostBSD 20.04 is out as the newest monthly update to this desktop-focused operating system built off the FreeBSD base...

04:33

Three is the magic number, unless you're Apple. That's how many million iPad shipments it was down in Q1 [The Register]

Cupertino hamstrung by Chinese factory closures, Samsung tabs up but wider market shrinks: IDC

Under the shadow of a pandemic that forced the shutdown of Apple's production lines in parts of China, global sales of iPads are tumbling rapidly.…

03:44

The ultimate 4-wheel-drive: How ESA's keeping XMM-Newton alive after 20 years and beyond [The Register]

You thought that yoghurt in the back of fridge was time-expired? Behold X-ray boffinry YEARS past its design-life

Space Extenders  Sure – that telescope can be serviced by Space Shuttle astronauts. But how do you keep one running for years past expiration without a prodding by spacewalkers? Behold ESA's XMM-Newton.…

03:00

Android trojan EventBot abuses accessibility services to clear out bank accounts – fortunately, it's 'in preview' [The Register]

Researchers analysing samples submitted to VirusTotal find new strain

Researchers have analysed a new strain of Android malware that does not yet exist in the wild.…

02:27

Extra knobs and dials for Microsoft's Productivity Score while Azure Active Directory lays on the freebies [The Register]

I always feel like / somebody's watching me

Microsoft is adding to its slightly worryingly named Productivity Score preview with additional granularity and categories.…

02:00

Using mergerfs to increase your virtual storage [Fedora Magazine]

What happens if you have multiple disks or partitions that you’d like to use for a media project, you don’t want to lose any of your existing data, and you’d like to have everything located or mounted under one drive? That’s where mergerfs can come to your rescue!

mergerfs is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices.

You will need to grab the latest RPM from their github page here. The releases for Fedora have fc and the version number in the name. For example here is the version for Fedora 31:

mergerfs-2.29.0-1.fc31.x86_64.rpm

Installing and configuring mergerfs

Install the mergerfs package that you’ve downloaded using sudo:

$ sudo dnf install mergerfs-2.29.0-1.fc31.x86_64.rpm

You will now be able to mount multiple disks as one drive. This comes in handy if you have a media server and you’d like all of your media files to show up under one location. If you upload new files to your system, you can copy them to your mergerfs directory and mergerfs will automatically copy them to whichever drive has enough free space available.

Here is an example to make it easier to understand:

$ df -hT | grep disk
/dev/sdb1      ext4      23M  386K 21M 2% /disk1
/dev/sdc1      ext4      44M  1.1M 40M 3% /disk2

$ ls -l /disk1/Videos/
total 1
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv

$ ls -l /disk2/Videos/
total 2
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv

In this example there are two disks mounted as disk1 and disk2. Both drives have a Videos directory with existing files.

Now we’re going to mount those drives using mergerfs to make them appear as one larger drive.

$ sudo mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M /disk1:/disk2 /media

The mergerfs man page is quite extensive and complex so we’ll break down the options that were specified.

  • defaults: This will use the default settings unless specified.
  • allow_other: allows users besides sudo or root to see the filesystem.
  • use_ino: Causes mergerfs to supply file/directory inodes rather than libfuse. While not the default, enabling it is recommended so that linked files share the same inode value.
  • category.create=mfs: Spreads files out across your drives based on available space.
  • moveonenospc=true: If a write fails because the target drive is out of space, mergerfs scans the pool for the drive with the most free space and moves the file there so the write can continue.
  • minfreespace=1M: The minimum free space a drive must have for mergerfs to consider it when placing new files.
  • disk1: First hard drive.
  • disk2: Second hard drive.
  • /media: The directory where the pooled drives are mounted.

Here is what it looks like:

$ df -hT | grep disk 
/dev/sdb1  ext4           23M      386K 21M 2% /disk1 
/dev/sdc1  ext4           44M      1.1M 40M 3% /disk2 

$ df -hT | grep media 
1:2        fuse.mergerfs  66M      1.4M 60M 3% /media 

You can see that the mergerfs mount now shows a total capacity of 66M, which is the combined total of the two hard drives.

Continuing with the example:

There is a 30MB video called Baby’s second Xmas.mkv. Let’s copy it to the /media folder, which is the mergerfs mount.

$ ls -lh "Baby's second Xmas.mkv"
-rw-rw-r--. 1 curt curt 30M Apr 20 08:45 Baby's second Xmas.mkv
$ cp "Baby's second Xmas.mkv" /media/Videos/

Here is the end result:

$ df -hT | grep disk
/dev/sdb1  ext4          23M 386K 21M 2% /disk1
/dev/sdc1  ext4          44M 31M 9.8M 76% /disk2

$ df -hT | grep media
1:2        fuse.mergerfs 66M 31M 30M 51% /media

You can see from the disk space utilization that mergerfs automatically copied the file to disk2 because disk1 did not have enough free space.

Here is a breakdown of all of the files:

$ ls -l /disk1/Videos/
total 1
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv

$ ls -l /disk2/Videos/
total 30003
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv

$ ls -l /media/Videos/
total 30004
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv
-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv
-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv
-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv

When you copy files to your mergerfs mount, it will always copy the files to the hard disk that has enough free space. If none of the drives in the pool have enough free space, then you won’t be able to copy them.
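
If you want the pooled mount to come back automatically after a reboot, an /etc/fstab entry along these lines should do it (a sketch reusing the example branches and options from above; adjust the paths and options to match your own drives):

/disk1:/disk2  /media  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M  0 0

The colon-separated branch list in the first column is the same format used on the mergerfs command line earlier in this article.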

01:51

Intel is offering more 14nm Skylake desktop processors, we repeat: More 14nm Skylake desktop processors [The Register]

10th-generation Core additions land with up to 10 CPU cores, 5.3GHz max

Intel this week unveiled the desktop processors in its 10th-generation Core series, the headline component being the 10-core i9-10900K that can run up to 5.3GHz.…

01:30

Atlassian to offensively price itself through the post-pandemic patch [The Register]

Claims to be ‘unscathed’ last quarter, will keep hiring and maybe acquiring

Atlassian has re-iterated that its business model is “playing offense in stormy weather” and will use the coronavirus crisis to acquire customers with freebies and maybe make some opportunistic acquisitions.…

01:09

Square peg of modem won't fit into round hole of PC? I saw to it, bloke tells horrified mate [The Register]

In praise of helpful friends and handy tools

On Call  Welcome to another entry in The Register's series of stories extracted from those lucky individuals that find themselves On Call.…

00:32

Uber trials fixed-price hourly rentals for visits to the butcher, the baker and the candlestick-maker [The Register]

Because if you have to go out in a plague, who wants multiple rides?

Uber has started a pilot of pre-paid hourly rentals.…

00:00

Identify and act on high-risk devices – faster [The Register]

Find out more at Forescout live virtual event on May 12

Promo  Not so very long ago, an office network was just that – in an office, connected to a bunch of servers in the cupboard next to the team room.…

Thursday, 30 April

23:28

Dell to unleash hybrid server/storage boxen that can run virtual machines [The Register]

Long-awaited storage consolidation to go hyperconverged lite so that workloads can run next to data

Dell will next week announce a significant refresh and consolidation of its storage range and at the same time try to reinvent storage arrays as computing appliances for data-centric workloads.…

23:03

What's worse than an annoying internet filter? How about one with a pre-auth remote-command execution hole and there's no patch? [The Register]

Bug can be exploited to hijack server, meddle with block lists

Netsweeper's internet filter has a nasty security vulnerability that can be exploited to hijack the host server and tamper with lists of blocked websites. There are no known fixes right now.…

22:11

International space station connects 100Mbps symmetric space laser ethernet using Sony optical disc tech [The Register]

As the Interplanetary Networking Special Interest Group launches discussion of Solar System Internet

The Japan Aerospace Exploration Agency (JAXA) has achieved a 100 Mbps ethernet connection from the International Space Station to earth, using lasers!…

21:31

ICANN finally halts $1.1bn sale of .org registry, says it's 'the right thing to do' after months of controversy [The Register]

Questions linger over what is going on inside DNS overseer

ICANN has vetoed the proposed $1.1bn sale of the .org registry to an unknown private equity firm, saying this was “the right thing to do.”…

20:48

Back when the huge shocking thing that felt like the end of the world was Australia on fire, it turns out telcos held up all right [The Register]

Or as well as they could once the power went out - yet report says reliance on electricity isn't a resilience issue

Back in January when Australia was on fire and the rest of the world wasn’t, locals in the burning zones were advised that the best source of information was emergency services apps. But they were unavailable because mobile networks had gone down.…

19:33

Apple on 2020 so far: OK, so iPhone sales are a bit glum. Wearables, music, apps, vids to the rescue... almost [The Register]

Hope on the horizon, says Cook, but it will take some time to get there

Apple's cash cow looks to be a bit unwell, as iPhone sales took a rare hit this coronavirus-ridden quarter. On the upside, the Cupertino idiot-tax operation banked billions from folks snapping up wearables, music, apps, video and more to kill their lockdown boredom.…

18:00

Linux Gaming, Qt Drama, New Hardware Kept Open-Source Enthusiasts Entertained This Month [Phoronix]

During the course of April, while much of the world was in lockdown, there were plenty of interesting happenings in the Linux/open-source and hardware space to keep enthusiasts interested while social distancing, from the release of Linux 5.6 to the releases of Fedora 32 and Ubuntu 20.04 LTS, among other milestones...

17:53

Jeff Bezos tells shareholders to buckle up: Amazon to blow this quarter's profits and more on coronavirus costs [The Register]

Cloud-giant-with-a-gift-shop gearing up for the long game

Amazon today reported $75.5bn in revenue for the first quarter of 2020, higher than expected though eroded by exceptional expenses. And it told investors to get used to its free spending ways during the coronavirus pandemic.…

16:48

Quibi, JetBlue, Wish, others accused of leaking millions of email addresses to ad orgs via HTTP referer headers [The Register]

From URL to UR-Hell

Short-video biz Quibi, airline JetBlue, shopping site Wish, and several other companies leaked millions of people's email addresses to ad-tracking and analytics firms through HTTP request headers, it is claimed.…

16:35

System76 Releases Pop!_OS 20.04 [Phoronix]

System76 released today Pop!_OS 20.04 as their in-house Linux distribution built off Ubuntu 20.04 LTS but with many customizations on top...

15:11

Faster than reflection: Microsoft previews Source Generators for C# [The Register]

.NET is getting faster but will not be as efficient as C++ or Go. Reason? Legacy code

Microsoft is previewing a new C# compiler feature called a Source Generator that it said will automatically spit out new source code and compile it when you build a project.…

15:05

GNOME 3.37.1 Released As The First Step Towards GNOME 3.38 [Phoronix]

With about a month and a half since GNOME 3.36 debuted, GNOME 3.37.1 is out today as the first development release towards GNOME 3.38 due out this September...

14:00

Couchbase goes cuckoo for Kubernetes with v2.0 release of Autonomous Operator [The Register]

NoSQL or open source, databases cannot help but be drawn to Googly cloud container orchestration system

The latest release from Couchbase finally includes support for Kubernetes, which is becoming something of a de facto standard among databases.…

13:10

More than one-fifth of smartphone sales evaporate in China as pandemic grips Middle Kingdom [The Register]

Where there's a will, there Huawei! America's fave bogeyman does the biz at home, is the only handset maker to grow

Huawei has emerged from China's COVID-19 ravaged smartphone sector in Q1 as the only handset maker to report a local sales bump - not a big one, but it's likely not complaining.…

12:53

Covid-19 and the workforce: Critical workers, productivity, and the future of AI [MIT Technology Review]

In less than two months, covid-19 created arguably the world’s largest collective shift in social activity and working practices. Research firm Global Workplace Analytics estimated in a 2018 report that 4.3 million people in the US worked remotely, representing just 3.2% of the country’s workforce. In a March 2020 poll of 375 executives by MIT Technology Review Insights, over two-thirds reported that more than 80% of their workforce is now working remotely.

As business leaders have sought to safeguard not only the health of staff, but the health and productivity of their companies, the pandemic has thrown up many questions—some that require immediate answers, others that need a longer-term plan. This report explores a new data set, developed by future-of-work software company Faethm, to examine the degree to which “business critical” jobs across industries are “remoteable,” and to what extent those jobs could be supported with artificial intelligence (AI) and automation technologies in the future. Its key findings are as follows: 

  • Directly related to the covid-19 pandemic, between 32 and 50 million US jobs could be increasingly assisted by technology to reduce health risks posed by human interaction and safeguard productivity in a time of crisis. 
  • Rarely, if ever before, have business managers navigated such a confluence of events as the covid-19 outbreak is triggering today, which combines immediate social and economic shocks with potentially repositioning the technology roadmap for their business around AI, automation, and the future of work.
  • Many specialist jobs can benefit from greater augmentation with AI. These include specialist medical roles such as anesthesiologists, nurses, and health technologists. Increased use of technology to augment those roles will likely make them more valuable and resilient in any future pandemic. 
  • Jobs where AI assistance is currently less feasible may be targets for innovation. Roles such as cashiers, servers, and drivers, whose constituent tasks can be fully automated, may be at risk as retailers and restaurants will over time seek to operate with fewer staff.
  • Pandemic preparedness will speed up AI deployment and accelerate the pace of AI innovation in high-risk job categories, causing both “job-positive” and “job-negative” effects. The broad deployment of AI in critical roles across health care and the supply chain will ultimately have a positive impact, making essential jobs safer and more effective, and boosting the readiness of economies such as the US to manage pandemics in the future.

Download the full report.

12:46

GCC 10 Has Been Branched, GCC 10.1 Stable Looking To Release In Early May [Phoronix]

The GNU Compiler Collection 10 stable release (GCC 10.1) is on track for releasing in early May...

12:42

Health systems are in need of radical change; virtual care will lead the way [MIT Technology Review]

The covid-19 pandemic has shown us how much health care is in need of not just tweaking but radical change. The pressure on global health systems, providers, and staff has already been increasing to unsustainable levels. But it also illustrates how much can be achieved in times of crisis: for example, China and the UK recently built thousands of extra beds in intensive care units, or ICUs, in less than two weeks. Health-care reform will need to spur a totally different approach to how care is organized, delivered, and distributed, which will be paramount in a (hopefully soon) post-covid-19 era. It’s the only way to deliver the quadruple aim of health care: better outcomes, improved patient and staff experience, and lower cost of care.

Jeroen Tas is chief information and strategy officer at Philips, and Jan Kimpen is chief medical officer.

What would this change look like? With enormous stress on health-care systems around the globe, it is more urgent than ever before to step up collaboration, information and knowledge sharing, and agility in the delivery of diagnostic, respiratory, and monitoring systems at scale. One of the most powerful ways to achieve this is by building the technology to collect, qualify, and analyze data in ways that quickly reveal patterns and hidden insights. It highlights the need for robust health-data infrastructures.

For example, in the Netherlands, Philips has partnered with Erasmus Medical Center, Jeroen Bosch Hospital, and the Netherlands Ministry of Health, Welfare and Sport to create an online portal that allows Dutch hospitals to share covid-19 patient information with one another. It ensures that a patient’s data is easily and securely transferred via the cloud from hospital A to hospital B. Being able to share patient data between hospitals at the touch of a button is vitally important to optimizing the use of health-care resources. It can, for example, assist in the seamless transfer of infected patients between hospitals to balance the load of critical-care units. Since its launch March 28, 95% of Dutch hospitals have already connected to the portal. In normal times this would have taken years.

How covid-19 is spurring the move to virtual care

A vital instrument for coping with a rapidly spreading infection like covid-19 is virtual care, or telehealth. With the large number of patients involved and the face-to-face risk of infecting other patients and staff, online consultations and remote patient management can provide valuable relief to the health-care system. Philips has made available a dedicated scalable telehealth application that facilitates the use of online patient screening and monitoring, supported by existing call centers. The application aims to prevent unnecessary visits to general practitioners and hospitals by remotely monitoring the vast majority of covid-19 patients who are quarantined at home. Patients infected with covid-19 can be assessed via smart questionnaires about their home situations and states of health. If intervention is needed in any particular case, clinicians will be notified and staff instructed.

During the current pandemic, where covid-19 occasionally results in severe pneumonia, we are seeing increasing numbers of patients requiring acute care in a hospital or an ICU. With numbers swelling to unmanageable proportions in many countries, health-care authorities face not only the challenge of limited numbers of ICU beds and ventilators but also staff shortages and burnout. Trained ICU doctors and nurses are already in short supply and repeated exposure to infected patients will increase their own risk of contracting the virus.

With the large number of patients involved and the face-to-face risk of infecting other patients and staff, online consultations and remote patient management can provide valuable relief to the health-care system.

A tele-ICU, or e-ICU, enables a co-located multi-disciplinary team of intensivists and critical-care nurses to remotely monitor patients in the ICU regardless of where patients are. Intensivists and nurses based in the telehealth e-ICU hub are supported by high-definition cameras, telemetry, predictive analytics, data visualization, and advanced reporting capabilities to support their frontline colleagues. Algorithms alert them to signs of patient deterioration or improvement. They help care teams to proactively intervene at an earlier stage or decide which patients have stabilized and can be transferred, allowing scarce ICU beds to be allocated to more acute patients. The tele-ICU can be embedded in a larger clinical and operations center that prioritizes patients on acuity and optimizes patient flow and logistics. This not only supports front-line staff to drive better patient outcomes but also helps optimize scarce resources.

Delivering predictive care beyond hospital walls

Advanced telemetry and camera technologies hold out the promise of monitoring acute patients at scale. In the near future, you can expect image analysis software that measures an ICU patient’s temperature, heart rate, and respiration rate from a distance of several meters. Using existing patient monitoring solutions, artificial intelligence (AI) is already able to use the acquired data to predict when a patient’s condition is about to deteriorate hours before a nurse would be able to spot it. And when patients do deteriorate, secure connections to external clinical experts allow appropriate treatment plans to be initiated. When there are staff shortages, such as reduced availability of night-shift staff, monitoring can be done from remote locations, even halfway across the world, where people are wide awake. At Philips, we are already deploying vital-signs camera technology to detect deterioration in patients waiting in emergency department waiting rooms so they can receive quick attention.

The remote monitoring approach can also be extended to the home, with smart wearables tracking patients who are infected or at risk of infection. These wearables, like the Philips smart biosensor patch, can measure body temperature, respiration, and heart rate, monitor sleep, and detect falls. All these measurements can be combined with contextual and behavioral information about patients to keep them as safe as possible.

Virtual care can also support other overburdened health-care fields, such as diagnosis. Ultrasound, x-ray, and CT scans are useful tools for covid-19 diagnosis and follow-up, but if patient numbers grow rapidly, the radiologists who assess them will be confronted by a much-increased workload. If there are not enough radiologists available in a hospital to take care of the local population, tele-radiology services can potentially offer them needed support. Radiologists from one hospital can remotely support their colleagues in another. Even so, with so many new covid-19 cases in countries around the world, patients often have to wait hours before getting the results. AI-enabled CT image analysis could potentially help to screen suspected covid-19 patients within minutes. This in turn could relieve pressure on complex laboratory-based tests to confirm the presence of coronavirus. But there are some tough challenges to be solved before an algorithm can distinguish influenza from covid-19.

A raft of innovative ideas and coping strategies are being tested worldwide. In the near future, we could see digital services closing the loop between consultations and the dispatch of care or prescription drugs, drones as vehicles for getting drugs to patients, and robots disinfecting contaminated areas. Apps and chat-bots that act as symptom checkers and provide up-to-the-minute travel and infection control advice. 5G-enabled cameras that check for symptoms in seconds. New ways of working that keep diagnostic procedures safe while still allowing fast assessment, such as the robot-guided ultrasound being trialed in China. Anything to keep the risk of disease transmission down to a minimum. Although these innovations won’t play global roles in the current crisis, keep an eye on them. Many health systems may go back to the drawing board to improve their care based on today’s experiences.

At Philips, we think the most important thing right now is that we work together to put the right measures in place on a global scale, with countries that have made it beyond the peak helping those in the middle of the pandemic. What we learn in the process will allow us to better predict and prepare for the future. One thing is certain, AI and virtual care, which are relatively new concepts to much of society, will play their part in combating the covid-19 pandemic. Experiencing these technologies firsthand will undoubtedly help shape the debate about their future role in health care—and what it means for all of us.

This article is a combination of two blog posts that first appeared on Philips.com: “Going virtual to combat COVID-19” by Jeroen Tas, and “How will COVID-19 change the working lives of doctors and nurses?” by Jan Kimpen.

12:18

Human intelligence may not be enough: US military turns to machine learning algos to predict food shortages [The Register]

Supply chain issues will be hit hard as workers get sick

Analysis  The US Department of Defense is building machine learning tools to help predict critical food and medicine shortages as America grapples with the coronavirus pandemic.…

11:29

Tesla sued over Tokyo biker's death in 'dozing driver' Autopilot crash [The Register]

Motorcyclist had stopped to help with a separate traffic accident, say court docs

Tesla is being sued by the widow and daughter of a man killed when an allegedly dozing driver let his Model X’s Autopilot feature steer it into a group of people.…

10:21

We're not Finnished yet: Nokia chalks up €200m sales hit to 'COVID-19 issues' [The Register]

Insists: It was the supply chain! We'll get the sales back later this year

Nokia Oyj told the market this morning that it estimates the novel coronavirus has "had an approximately €200m negative impact" on its Q1 2020 sales, mostly due to "supply chain challenges" but insisted the sales would be "shifted to future periods", rather than being lost to the ledger entirely.…

09:37

Google is a 'publisher' says Aussie court as it hands £20k damages to gangland lawyer [The Register]

Chocolate Factory held liable for words on its website

An Australian court has declared that Google is a "publisher" and awarded an aggrieved lawyer £20,000 after searches on his name returned criminal allegations from his past.…

09:26

DDoS attacks have evolved, and so should your DDoS protection [The Cloudflare Blog]


The proliferation of DDoS attacks of varying size, duration, and persistence has made DDoS protection a foundational part of every business and organization’s online presence. However, there are key considerations including network capacity, management capabilities, global distribution, alerting, reporting and support that security and risk management technical professionals need to evaluate when selecting a DDoS protection solution.

Gartner’s view of the DDoS solutions; How did Cloudflare fare?

Gartner recently published the report Solution Comparison for DDoS Cloud Scrubbing Centers (ID G00467346), authored by Thomas Lintemuth, Patrick Hevesi and Sushil Aryal. This report enables customers to view a side-by-side solution comparison of different DDoS cloud scrubbing centers measured against common assessment criteria.  If you have a Gartner subscription, you can view the report here. Cloudflare has received the greatest number of ‘High’ ratings as compared to the 6 other DDoS vendors across 23 assessment criteria in the report.

The vast landscape of DDoS attacks

From our perspective, the nature of DDoS attacks has transformed, as the economics and ease of launching a DDoS attack has changed dramatically. With a rise in cost-effective capabilities of launching a DDoS attack, we have observed a rise in the number of under 10 Gbps DDoS network-level attacks, as shown in the figure below. Even though 10 Gbps from an attack size perspective does not seem that large, it is large enough to significantly affect a majority of the websites existing today.


At the same time, larger-sized DDoS attacks are still prevalent and have the capability of crippling the availability of an organization’s infrastructure. In March 2020, Cloudflare observed numerous 300+ Gbps attacks with the largest attack being 550 Gbps in size.


In the report Gartner also observes a similar trend, “In speaking with the vendors for this research, Gartner discovered a consistent theme: Clients are experiencing more frequent smaller attacks versus larger volumetric attacks.” In addition, they also observe that “For enterprises with Internet connections up to and exceeding 10 Gbps, frequent but short attacks up to 10 Gbps are still quite disruptive without DDoS protection. Not to say that large attacks have gone away. We haven’t seen a 1-plus Tbps attack since spring 2018, but attacks over 500 Gbps are still common.”

Gartner recommends in the report to “Choose a provider that offers scrubbing capacity of three times the largest documented volumetric attack on your continent.”

From an application-level DDoS attack perspective an interesting DDoS attack observed and mitigated by Cloudflare last year, is shown below. This HTTP DDoS attack had a peak of 1.4M requests per second, which isn’t highly rate-intensive. However, the fact that the 1.1M IPs from which the attack originated were unique and not spoofed made the attack quite interesting. The unique IP addresses were actual clients who were able to complete a TCP and HTTPS handshake.


Harness the full power of Cloudflare’s DDoS protection

Cloudflare’s cloud-delivered DDoS solution provides key features that enable security professionals to protect their organizations and customers against even the most sophisticated DDoS attacks. Some of the key features and benefits include:

  • Massive network capacity: With over 35 Tbps of network capacity, Cloudflare ensures that you are protected against even the most sophisticated and largest DDoS attacks. Cloudflare’s network capacity is almost equal to the total scrubbing capacity of the other 6 leading DDoS vendors combined.
  • Globally distributed architecture: Having a few scrubbing centers globally to mitigate DDoS attacks is an outdated approach. As DDoS attacks scale and individual attacks originate from millions of unique IPs worldwide, it’s important to have a DDoS solution that mitigates the attack at the source rather than hauling traffic to a dedicated scrubbing center. With every one of our data centers across 200 cities enabled with full DDoS mitigation capabilities, Cloudflare has more points of presence than the 6 leading DDoS vendors combined.
  • Fast time to mitigation: Automated edge-analyzed and edge-enforced DDoS mitigation capabilities allows us to mitigate attacks at unprecedented speeds. Typical time to mitigate a DDoS attack is less than 10s.
  • Integrated security: A key design tenet while building products at Cloudflare is integration. Our DDoS solution integrates seamlessly with other product offerings including WAF, Bot Management, CDN and many more. A comprehensive and integrated security solution to bolster the security posture while aiding performance. No tradeoffs between security and performance!
  • Unmetered and unlimited mitigation: Cloudflare offers unlimited and unmetered DDoS mitigation. This eliminates the legacy concept of ‘Surge Pricing,’ which is especially painful when a business is under duress and experiencing a DDoS attack. This enables you to avoid unpredictable costs from traffic.

Whether you’re part of a large global enterprise, or use Cloudflare for your personal site, we want to make sure that you’re protected and also have the visibility that you need. DDoS Protection is included as part of every Cloudflare service. Enterprise-level plans include advanced mitigation, detailed reporting, enriched logs, productivity enhancements and fine-grained controls. Enterprise Plan customers also receive access to dedicated customer success and solution engineering.

To learn more about Cloudflare’s DDoS solution contact us or get started.

*Gartner “Solution Comparison for DDoS Cloud Scrubbing Centers,” Thomas Lintemuth,  Patrick Hevesi, Sushil Aryal, 16 April 2020

09:00

Saturday Morning Breakfast Cereal - Weak [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Looking at you, Gyruss. Hand-eye coordination my ass.


Today's News:

08:58

Sun shines on ServiceNow amid pandemic storm after belated spree of $1m+ deals [The Register]

Always be closing, especially when the economy's in a tailspin

Workflow wizard ServiceNow seems to have dodged the market glitch at the end of Q1 and secured deals sufficient to beat guidance with its results.…

08:56

Redis 6.0 Released As A Big Update For This In-Memory Key-Value Database [Phoronix]

Redis 6.0 is out to end out April as this widely-used, open-source in-memory key-value database solution...

07:59

Virtual meetings in Animal Crossing are so last month. Behold the virtual computer museum [The Register]

Social simulation in an era of social distancing

News reaches Vulture Central of more retro computing goodness courtesy of the hit game Animal Crossing and an enterprising member of staff at the currently shuttered Centre for Computing History.…

07:29

Microsoft unveils simpler, easier Windows Virtual Desktop: You no longer need to be a VDI expert to make this work [The Register]

Also: what does the Windows giant have in common with the Boomtown Rats? Neither seems keen on Mondays

Microsoft is having a crack at simplifying Windows Virtual Desktop while rolling out support for more operating systems.…

07:00

Intel Announces 10th Gen Core S-Series CPUs, Led By The Core i9 10900K [Phoronix]

Intel today is announcing their 10th Gen Core "Comet Lake" S-Series processors led by the Core i9 10900 series that the company claims is now the world's fastest gaming processor and offers clock speeds up to 5.3GHz.

06:55

Process miner Celonis pushes out application tools to tighten up how they're used in anger [The Register]

Better look into how users don't use software the way biz thinks they do

Celonis has carved a niche by selling companies such as Siemens, 3M, Airbus and Vodafone the software and analytics techniques to "X-ray" their business processes in the hope of ironing out kinks to save time and money.…

06:10

Lars Ulrich makes veiled threats of another Metallica album during web chat with Salesforce CEO Marc Benioff [The Register]

You know what, Lars? We're OK

Lars Ulrich, drummer of corporate shlock rock merchants Metallica, has threatened that the thrash metal giants could make a new album despite the coronavirus lockdown as he wagged chins with Salesforce chief Marc Benioff.…

05:35

Salt peppered with holes? Automation tool vulnerable to auth bypass: Patch now [The Register]

'The impact is full remote command execution as root on both master and all minions'

The Salt configuration tool has patched two vulnerabilities whose combined effect was to expose Salt installations to complete control by an attacker. A patch for the issues was released last night, but systems that are not set to auto-update may still be vulnerable.…

05:00

Vodafone issues a stay of execution for Demon domain hold-outs [The Register]

Expiration pushed to 1 September due to COVID-19 challenges

Vodafone has declared a stay of execution for the venerable Demon subdomain, extending the licence until September 2020.…

05:00

Empowering our Customers and Service Partners [The Cloudflare Blog]

Last year, Cloudflare announced the planned expansion of our partner program to help managed and professional service partners efficiently engage with Cloudflare and join us in our mission to help build a better Internet. Today, we want to highlight some of those amazing partners and our growing support and training for MSPs around the globe. We want to make sure service partners have the enablement and resources they need to bring a more secure and performant Internet experience to their customers.

This partner program tier is specifically designed for professional service firms and Managed Service Providers (MSPs and MSSPs) that want to build value-added services and support Cloudflare customers. While Cloudflare is hyper-focused on building highly scalable and easy to use products, we recognize that some customers may want to engage a professional services firm to help them maximize the value of our offerings. From building Cloudflare Workers to implementing multi-cloud load balancing to managing WAF and DDoS events, our partner training and support enables sales and technical teams to position and support the Cloudflare platform as well as enhance their services businesses.

Training

Our training and certification is meant to help partners through each stage of Cloudflare adoption, from discovery and sale to implementation, operation and continuous optimization. The program includes hands-on education, partner support and success resources, and access to account managers and partner enablement engineers.  

  • Accredited Sales Professional - Learn about key product features and how to identify opportunities and find the best solution for customers.
  • Accredited Sales Engineer - Learn about Cloudflare’s technical differentiation that drives a smarter, faster and safer Internet.
  • Accredited Configuration Engineer - Learn about implementation, best practices, and supporting Cloudflare.
  • Accredited Services Architect - Launching in May, our Architect accreditation dives deeper into cybersecurity management, performance optimization, and migration services for Cloudflare.
  • Accredited Workers Developer (In Development) - Learn how to develop and deploy serverless applications with Cloudflare Workers.
Cloudflare Partner Accreditation

Service Opportunities

Over the past year, the partners we’ve engaged with have found success throughout Cloudflare’s lifecycle by helping customers understand how to transform their network in their move to hybrid and multi-cloud solutions, develop serverless applications, or manage the Cloudflare platform.

Network Digital Transformations

“Cloudflare is streamlining our migration from on-prem to the cloud. As we tap into various public cloud services, Cloudflare serves as our independent, unified point of control — giving us the strategic flexibility to choose the right cloud solution for the job, and the ability to easily make changes down the line.” — Dr. Isabel Wolters, Chief Technology Officer, Handelsblatt Media Group

Serverless Architecture Development

"At Queue-it we pride ourselves on being the leading developer of virtual waiting room technology, providing a first-in, first-out online waiting system. By partnering with Cloudflare, we've made it easier for our joint customers to bring our solution to their applications through Cloudflare Apps and our Cloudflare Workers Connector that leverages the power of edge computing."  - Henrik Bjergegaard, VP Sales, Queue-It

Managed Security & Insights

“Opticca Security supports our clients with proven and reliable solutions to ensure business continuity and protection of your online assets. Opticca Security has grown our partnership with Cloudflare over the years to support the quick deployment, seamless integration, and trusted expertise of Cloudflare Security solutions, Cloudflare Workers, and more." -- Joey Campione, President, Opticca Security

Partner Showcase - Zilker Technology

We wanted to highlight the success of one of our managed service partners who, together with Cloudflare, is delivering a more secure, higher-performing, and more reliable Internet experience for customers.

Zilker Technology engaged Cloudflare when one of their eCommerce clients, the retail store of a major NFL team, was facing carding attacks and other malicious activity on their sites. "Our client activated their Cloudflare subscription on a Thursday, and we were live with Cloudflare in production the following Tuesday, ahead of Black Friday later that same week," says Drew Harris, Director of Managed Services for Zilker. "It was crazy fast and easy!"

Carding, also known as credit card stuffing, fraud, or verification, happens when cybercriminals attempt to make small purchases with large volumes of stolen credit card numbers on one eCommerce platform.

In addition to gaining enhanced security and protection from the Cloudflare WAF, advanced DDoS protection, and rate-limiting, Zilker replaced the client's legacy CDN with Cloudflare CDN, improving site performance and user experience. Zilker provides full-stack managed services and 24/7 support for the client, including Cloudflare monitoring and management.

“Partnering with Cloudflare gives us peace of mind that we can deliver on customer expectations of security and performance all the time, every day. Even as new threats emerge, Cloudflare is one step ahead of the game,” says Matthew Fox, VP of Business Development.

Just getting started

Cloudflare is committed to making our service partners successful to ensure our customers have the best technology and expertise available to them as they accelerate and protect their critical applications, infrastructure, and teams. As Cloudflare grows our product set, we’ve seen increased demand for the services provided by our partners. Cloudflare is excited and grateful to work with amazing agencies, professional services firms, and managed security providers across the globe. The diverse Cloudflare Partner Network is essential to our mission of helping to build a better Internet, and we are dedicated to the success of our partners. We remain committed to making Cloudflare the easiest and most rewarding solution for customers and partners to implement together.

04:48

AMDVLK 2020.Q2.2 Flips On The Pipeline Binary Cache, Tunes SoTR Performance [Phoronix]

AMDVLK 2020.Q2.2 has been issued today as the company's latest open-source AMD Radeon Vulkan driver based off their official driver source tree...

04:36

Raspberry Pi Announces The $50 High Quality Camera [Phoronix]

Raspberry Pi today announced their newest product, the High Quality Camera, which starts at $50 and supports interchangeable lenses...

04:15

Prank warning: You do know your smart speaker's paired with Spotify over the internet, don't you? [The Register]

I can't stop people playing music at me, says Reg reader

If you let your mates pair their Spotify accounts with your smart speakers, beware – the connection persists across the internet, not just across your home Wi-Fi network, as some assumed.…

04:00

Covid hoaxes are using a loophole to stay alive—even after content is deleted [MIT Technology Review]

Since the onset of the pandemic, the Technology and Social Change Research Project at Harvard Kennedy’s Shorenstein Center, where I am the director, has been investigating how misinformation, scams, and conspiracies about covid-19 circulate online. If fraudsters are now using the virus to dupe unsuspecting individuals, we thought, then our research on misinformation should focus on understanding the new tactics of these media manipulators. What we found was a disconcerting explosion in “zombie content.”

In April, Amelia Acker, assistant professor of information studies at UT Austin, brought our attention to a popular link containing conspiratorial propaganda suggesting that China is hiding important information about covid-19. 

The News NT website

The original post was from a generic-looking site called News NT, alleging that 21 million people had died from covid-19 in China. That story was quickly debunked, and according to data from Crowdtangle (a metric and engagement product owned by Facebook), the original link was not popular, garnering only 520 interactions and 100 shares on Facebook. Facebook, in turn, placed a fact-checking label on this content, which limits its ranking in the algorithmic systems for news feed and search. But something else was off about the pattern of distribution.

CrowdTangle’s results for the deleted News NT story available via the Wayback Machine

While the original page failed to spread fake news, the version of the page saved on the Internet Archive’s Wayback Machine absolutely flourished on Facebook. With 649,000 interactions and 118,000 shares, the Wayback Machine’s link achieved much greater engagement than legitimate press outlets. Facebook has since placed a fact-check label over the link to the Wayback Machine version too, but it had already been seen a huge number of times. 

There are several explanations for this hidden virality. Some people use the Internet Archive to evade blocking of banned domains in their home country, but it is not simply about censorship.  Others are seeking to get around fact-checking and algorithmic demotion of content.

Many of the Facebook shares are to right-wing groups and pages in the US, as well as to groups and pages critical of China in Pakistan and Southeast Asia. The most interactions on the News NT Wayback Machine link come from a public Facebook group, Trump for President 2020, which is administered by Brian Kolfage. He is best known as the person behind the controversial nonprofit We Build the Wall. Using the technique of keyword squatting, this page has sought to capture those seeking to join Facebook groups related to Trump. It now has nearly 240,000 members, and the public group has changed its name several times, from “PRESIDENT DONALD TRUMP [OFFICIAL]” to “President Donald Trump ✅ [OFFICIAL]” then “The Deplorable’s ✅” and finally “Trump For President 2020.” By claiming to be Trump’s “official” page and using an impostor check mark, groups like this can engender trust among an already polarized public.

When looking for more evidence of hidden virality, we searched for “web.archive.org” across platforms. Unsurprisingly, Medium posts that were taken down for spreading health misinformation have found new life through Wayback Machine links. One deleted Medium story, “Covid-19 had us all fooled, but now we might have finally found its secret,” violated Medium’s policies on misleading health information. Before Medium’s takedown, the original post amassed 6,000 interactions and 1,200 shares on Facebook, but the archived version is vastly more popular—1.6 million interactions, 310,000 shares, and still climbing. This zombie content has better performance than most mainstream media news stories, and yet it exists only as an archived record.

Data from Crowdtangle on the original Medium post and on the archived version

Perhaps the most alarming element to a researcher like me is that these harmful conspiracies permeate private pages and groups on Facebook. This means researchers have access to less than 2% of the interaction data, and that health misinformation circulates in spaces where journalists, independent researchers, and public health advocates cannot assess it or counterbalance these false claims with facts. Crucially, if it weren’t for the Internet Archive’s records we would not be able to do this research on deleted content in the first place, but these use cases suggest that the Internet Archive will soon have to address how its service can be adapted to deal with disinformation.

Hidden virality is growing in places where WhatsApp is popular, because it’s easy to forward misinformation through encrypted channels and evade content moderation. But when hidden virality happens on Facebook with health misinformation, it is particularly disconcerting. More than 50% of Americans rely on Facebook for their news, and still, after many years of concern and complaint, researchers have a very limited window into the data. This means it’s nearly impossible to ethically investigate how dangerous health misinformation is shared on private pages and groups. 

All this poses a different threat than political or news misinformation, because people do quickly change their behavior on the basis of medical recommendations. 

Throughout the last decade of researching platform politics, I have never witnessed such collateral damage to society caused by unchecked abusive content spread across the web and social media. Everyone interested in fostering the health of the population should strive to hold social-media companies to account in this moment. As well, social-media companies should create a protocol for strategic amplification that defines successful recommendations and healthy news feeds as those maximizing respect, dignity, and productive social values, while looking to independent researchers and librarians to identify authoritative content, especially when our lives are at stake. 

03:30

Cheshire Police celebrates three-year migration to Oracle Fusion by lobbing out tender for system to replace it... one year later [The Register]

Fighting crimes in between upgrading databases

Updated  Cheshire cops have begun tendering for a new £11 million ERP system just 12 months after the current one - Oracle Fusion - went live following a three-year migration.…

02:50

Indian IT outsourcing giant Wipro picks Nutanix to help tame Oracle and SQL Server [The Register]

Managed databases that feel like they're caressed in cloud

Wipro and Nutanix have bonded over managing databases after the Indian services company created a new range of “Digital Database Services” based on the hyperconverged upstart’s tooling.…

02:15

You can get a mechanical keyboard for £45. But should you? We pulled an Aukey KM-G6 out of the bargain bin [The Register]

And it's not terrible

Mechanical keyboards were once a niche commodity owned primarily by enthusiasts who were all too happy to pay top dollar. Now it's possible to get one for as little as £25 on Amazon, thanks to China's prolific factories and the availability of cheap Cherry-clone key switches.…

01:27

Linux 5.5 vs. 5.6 vs. 5.7 Kernel Benchmarks With The Intel Core i9 10980XE [Phoronix]

Besides those systems now seeing Schedutil by default as the CPU frequency scaling governor and some Radeon gaming performance gains to note, the performance of Linux 5.7 in our testing thus far has largely been on track with Linux 5.6 stable...

01:26

Alibaba takes VMware where AWS and Microsoft don't – behind the Great Firewall [The Register]

Only in China ‘for the moment’ which leaves the world to conquer

VMware’s partnership with Alibaba Cloud has borne fruit behind the Great Firewall…

00:54

Red Hat’s new CEO on surviving inside Big Blue: 'We don’t participate in IBM's culture. It’s that simple' [The Register]

Paul Cormier talks hybrid cloud growth and independence with El Reg

Interview  Red Hat’s new CEO is feeling confident. It’s a pretty good time to be the head of a company whose entire business is virtual: virtual machines, hybrid cloud, operating system support, Kubernetes containers. These are boom times.…

Wednesday, 29 April

23:56

AMD AOMP 11.5 Released For OpenMP Offloading To Radeon GPUs [Phoronix]

Released on Wednesday was AOMP 11.5 as the latest version of the AMD/ROCm compiler based off LLVM Clang and focused on OpenMP offloading to Radeon GPUs...

22:02

X.Org Board Elections Wrap Up For 2020 [Phoronix]

The X.Org Board of Directors elections wrapped up this week with four new members now serving this organization that oversees the X.Org Server, Mesa, Wayland, and other critical Linux desktop infrastructure...

18:45

Mesa 20.1 Feature Development Ends With RC1 Released [Phoronix]

Mesa 20.1 feature development is now over with it being branched from Git master and subsequently Mesa 20.1-RC1 being released this evening...

16:51

Virginia Tech's "Popcorn Linux" For Distributed Thread Execution Seeking Feedback, Possible Upstreaming [Phoronix]

Popcorn Linux has been a multi-year effort out of Virginia Tech's Software and Systems Research Group for distributed thread execution across systems and even potentially different ISAs/accelerators given today's heterogeneous hardware...

15:20

Remdesivir seems to shorten covid hospital stays and may save lives [MIT Technology Review]

The good news started trickling out early this morning, first in a vague company press release and then, by midday, from the White House.

A drug called remdesivir appears to actually work against the coronavirus that causes covid-19.

The news was delivered to President Donald Trump by Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases (NIAID), who said covid-19 patients who received the antiviral drug recovered 31% faster, in 11 days instead of 15, in a placebo-controlled trial carried out by his agency.

“Even though it doesn’t seem like a knockout 100%, it is a very important proof of concept. What it has proven is that a drug can block this virus,” Fauci said while seated on a couch in the Oval Office, with Trump looking on.

The finding that remdesivir helps covid-19 patients is a step toward getting us out of the health and economic disasters caused by the pandemic, and fuels hopes that more will follow. Later in the summer new antibody drugs to battle the infection could be available, and vaccines could arrive in the months after that.

Scott Gottlieb, former commissioner of the Food and Drug Administration, told CNBC that remdesivir could be part of a “robust toolbox,” including better testing, that “could mitigate the risk, and the fear that this is a race to the bottom and that there is nothing there to help you.” 

“I think all of this is going to put us in a much different posture for the fall,” Gottlieb said, “and allow us to get back to some semblance of our normal lives, even with covid circulating in the background.”

Remdesivir is made by Gilead Sciences, a biotechnology company based in Foster City, California, whose previous successes included a blockbuster cure for hepatitis C.

The clinical data still has not been published, and a report from China saw no benefit in severe cases. But if the new findings hold up, remdesivir will quickly become the mainstay treatment for covid-19.

Emergency approval by the FDA is likely, and you can expect a huge scramble by Gilead to make enough of the medicine.

In his comments, Fauci compared the early result to the introduction of AZT, one of the first medicines to prove modestly successful in treating HIV, the virus that causes AIDS. After AZT, he said, ever more effective medications became available. 

Besides the shorter recovery times, there were also signs remdesivir reduced the chance of dying from covid-19, but that data was not as definitive. The death rate for patients who got the drug was 8%, versus 11.6% for those who didn’t. Either way, the figures were a reminder of the significant chance that anyone admitted to the hospital because of covid-19 respiratory problems will die.

The NIAID study started on February 21. Half the patients got Gilead’s remdesivir and half got a placebo drug.

The first patient to join the trial was an American who caught the virus on the Diamond Princess cruise ship and was treated at the University of Nebraska. In total, about 1,000 patients joined the study, at sites in the US, Germany, Spain, Greece, and the United Kingdom, among other locations.

The reason remdesivir was available to try against covid-19 so quickly is that it is a repurposed drug—meaning it’s already been studied for other uses. Previous experiments showed it could block the SARS virus in the lab, and it was tried on patients with Ebola. The drug inhibits a molecule that RNA viruses need to make new copies of themselves.

On April 4, Gilead CEO Daniel O’Day said the company had enough of the drug on hand or nearly finished to treat 140,000 people and that those doses were being used in clinical trials or being offered to patients “at no charge.”

Huge questions now surround issues like how to distribute doses fairly, and what the medicine could cost. An explosive debate over who gets the drug, including what countries get the limited supplies, seems unavoidable.

The manufacturing of remdesivir occurs in stages—chemicals are added in synthetic steps—and it takes several months to create a batch.

Gilead said it had stepped up production and could make enough of the drug to treat a million people by the end of the year. However, more than three times that many cases have already been confirmed, and at least 224,000 people have died from the disease as of April 29.

More study is needed to pinpoint the optimal dose and the types of patients who benefit most.

A group of researchers said in a white paper released this week that the US government should allow more companies to manufacture the drug and take steps to repurpose chemical production lines to make large amounts of it.

That group, which includes Harvard University chemist Stuart Schreiber, says remdesivir should be given soon after symptoms start, and in higher doses. “We speculate that the current dose is chosen because of limited supplies. We urge the government to determine the facts around this issue so optimal trial doses for efficacy can be determined,” they wrote.

Gilead separately reported results of a study in which five-day and 10-day courses of remdesivir seemed to have similar effects. Shorter courses would be a way to ration supplies.

13:50

AMD Programmer Manual Update Points To PCID Support, Memory Protection Keys [Phoronix]

It looks like AMD Zen 3 CPUs will finally be supporting PCID! And memory protection keys are coming too, at least according to AMD's latest programmer reference manual...

12:55

For Radeon Gamers On Ubuntu 20.04 LTS, It's Generally Worthwhile Flipping On RADV's ACO [Phoronix]

A premium supporter was asking this week whether, for those newly upgraded to Ubuntu 20.04 LTS, the graphics stack is in good enough shape as shipped or whether I would recommend running Mesa 20.1-devel for better AMD Linux gaming performance... The short answer: unless there are particular changes you are after in Mesa 20.1-devel, the bigger gain on this new Ubuntu release is instead to enable RADV+ACO, which is a much more pressing boost...
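For those wanting to test it, ACO can be toggled at run time through the RADV_PERFTEST environment variable with Mesa 20.0's RADV driver, without changing any system packages. A minimal check, assuming the vulkan-tools package is installed so that vkcube is available:

RADV_PERFTEST=aco vkcube

For Steam titles the same variable can be set per game via the launch options, e.g. RADV_PERFTEST=aco %command%.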

10:52

Saturday Morning Breakfast Cereal - Social Desirability [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If you happen to be a Future Hitler, you can purchase an indulgence via the SMBC store.



09:00

Facebook claims its new chatbot beats Google’s as the best in the world [MIT Technology Review]

For all the progress that chatbots and virtual assistants have made, they’re still terrible conversationalists. Most are highly task-oriented: you make a demand and they comply. Some are highly frustrating: they never seem to get what you’re looking for. Others are awfully boring: they lack the charm of a human companion. It’s fine when you’re only looking to set a timer. But as these bots become increasingly popular as interfaces for everything from retail to health care to financial services, the inadequacies only grow more apparent.

Now Facebook has open-sourced a new chatbot that it claims can talk about nearly anything in an engaging and interesting way. Blender could not only help virtual assistants resolve many of their shortcomings but also mark progress toward the greater ambition driving much of AI research: to replicate intelligence. “Dialogue is sort of an ‘AI complete’ problem,” says Stephen Roller, a research engineer at Facebook who co-led the project. “You would have to solve all of AI to solve dialogue, and if you solve dialogue, you’ve solved all of AI.”

Blender’s ability comes from the immense scale of its training data. It was first trained on 1.5 billion publicly available Reddit conversations, to give it a foundation for generating responses in a dialogue. It was then fine-tuned with additional data sets for each of three skills: conversations that contained some kind of emotion, to teach it empathy (if a user says “I got a promotion,” for example, it can say, “Congratulations!”); information-dense conversations with an expert, to teach it knowledge; and conversations between people with distinct personas, to teach it personality. The resultant model is 3.6 times larger than Google’s chatbot Meena, which was announced in January—so big that it can’t fit on a single device and must run across two computing chips instead.

Facebook Blender chat log
An example of a conversation between a human and Blender.
FACEBOOK

At the time, Google proclaimed that Meena was the best chatbot in the world. In Facebook’s own tests, however, 75% of human evaluators found Blender more engaging than Meena, and 67% found it to sound more like a human. The chatbot also fooled human evaluators 49% of the time into thinking that its conversation logs were more human than the conversation logs between real people—meaning there wasn’t much of a qualitative difference between the two. Google hadn’t responded to a request for comment by the time this story was due to be published.

Despite these impressive results, however, Blender’s skills are still nowhere near those of a human. Thus far, the team has evaluated the chatbot only on short conversations with 14 turns. If it kept chatting longer, the researchers suspect, it would soon stop making sense. “These models aren’t able to go super in-depth,” says Emily Dinan, the other project leader. “They’re not able to remember conversational history beyond a few turns.”

Blender also has a tendency to “hallucinate” knowledge, or make up facts—a direct limitation of the deep-learning techniques used to build it. It’s ultimately generating its sentences from statistical correlations rather than a database of knowledge. As a result, it can string together a detailed and coherent description of a famous celebrity, for example, but with completely false information. The team plans to experiment with integrating a knowledge database into the chatbot’s response generation.

Facebook Blender evaluation
Human evaluators compared multi-turn conversations with different chatbots.
FACEBOOK

Another major challenge with any open-ended chatbot system is to prevent it from saying toxic or biased things. Because such systems are ultimately trained on social media, they can end up regurgitating the vitriol of the internet. (This infamously happened to Microsoft’s chatbot Tay in 2016.) The team tried to address this issue by asking crowdworkers to filter out harmful language from the three data sets that it used for fine-tuning, but it did not do the same for the Reddit data set because of its size. (Anyone who has spent much time on Reddit will know why that could be problematic.)

The team hopes to experiment with better safety mechanisms, including a toxic-language classifier that could double-check the chatbot’s response. The researchers admit, however, that this approach won’t be comprehensive. Sometimes a sentence like “Yes, that’s great” can seem fine, but within a sensitive context, such as in response to a racist comment, it can take on harmful meanings.

In the long term the Facebook AI team is also interested in developing more sophisticated conversational agents that can respond to visual cues as well as just words. One project is developing a system called Image Chat, for example, that can converse sensibly and with personality about the photos a user might send.

05:00

Doubling the intern class - and making it all virtual [The Cloudflare Blog]


Earlier this month, we announced our plans to relaunch our intern hiring and double our intern class this summer to support more students who may have lost their internships due to COVID-19. You can find that story here. We’ve had interns joining us over the last few summers - students found their way to us by applying to full-time roles and sometimes through Twitter. But it wasn’t until last summer, in 2019, that we ran our first official Summer Internship Program. And this year, we are doubling down.

Why do we invest in interns?

We have found interns to be invaluable. Not only do they bring an electrifying new energy over the summer, but they also bring the curiosity to help solve problems, contribute to major projects, and offer refreshing perspectives to the company.

  1. Ship projects: Our interns are matched with a team and work on real and meaningful projects. They are expected to ramp up, contribute like other members of the team and ship by the end of their internship.
  2. Hire strong talent: The internship is the “ultimate interview” that allows us to better assess new grad talent. The 12 weeks they spend with us tell us how they work with the team, their curiosity, passion and interest in the company and mission, and overall ability to execute and ship.
  3. Increase brand awareness: Some of the best interns and new grads we’ve hired come from referrals from past interns. Students go back to school and share their summer experience with their peers and classmates, and word can spread like wildfire. This will make long-term hiring much easier.
  4. Help grow future talent: Companies of all sizes should hire interns to help grow a more diverse talent pool, otherwise the future talent would be shaped by companies like Google, Facebook, Microsoft and the like. The experience gained from working at a small or mid-sized startup versus a behemoth company is very different.

Our founding principles. What makes a great internship?

How do we make sure we’re prepared for interns? And what should companies and teams consider to ensure a great internship experience? It’s important for companies to be prepared to onboard interns so interns have a great and fruitful experience. These are general items to consider:

  1. Committed manager and/or mentor: Interns need a lot of support especially in the beginning, and it’s essential to have a manager or mentor who is willing to commit 30+% of their time to train, teach, and guide the intern for the entire duration of the summer. I would even advise managers/mentors to plan their summer vacations accordingly and if they’re not there for a week or more, they should have a backup support plan.
  2. Defined projects and goals: We ask managers to work with their interns to clearly  identify projects and goals they would be interested in working on either before the internship starts, or within the first 2 weeks. By the end of the internship, we want each intern to have learned a lot, be proud of the work they’ve accomplished and present their work to executives and the whole company.
  3. Open environment and networking: Throughout the internship, we intentionally create opportunities to meet more people and allow a safe environment for them to ask questions and be curious. Interns connect with each other, employees across other teams, and executives through our Buddy Program, Executive Round Tables, and other social events and outings.
  4. Visibility and exposure: Near the end of the internship, all interns are encouraged and given the opportunity to present their work to the whole company and share their project or experience on the company blog. Because they are an integral part of the team, many times they’ll join meetings with our leaders and executives.

The pivot to virtual: what we changed

The above are general goals and best practices for an internship during normal times. These are far from normal times. Like many companies, we were faced with the daunting question of what to do with our internship program when it was apparent that all or most of it would be virtual. We leaned into that challenge and developed a plan to build a virtual internship program that still embodies the principles we mentioned and ensures a robust internship experience.

The general mantra will be to over-communicate and make sure interns are included in all the team’s activities, communications, meetings, etc. Not only will it be important to include interns in this, it's even more important because these members of our team will crave it the most. They'll lack the historical context existing employees share, and also won't have the breadth of general work experience that their team has. This is where mentors and managers will have to find ways to go above and beyond. Here are some tips below.

Onboarding

Interns will need to onboard in a completely remote environment, which may be new to both the manager and the company. If possible, check in with the interns before their first day to start building that relationship - understand what their remote work environment is like, how their mental health is holding up during COVID-19, and whether they’re excited and prepared to start. Also, keep in mind that the first two weeks are critical to set expectations for goals and deliverables, to connect them with the right folks involved in their project, and to allow them to ask all their questions and get comfortable with the team.

Logistically, this may involve a laptop being mailed to them, or other accommodations for remote work. Verify that the intern has been onboarded correctly with access to necessary tools. Make a checklist. Some ideas to start with:

  1. Can they send/receive email on your company’s email address?
  2. Do you have their phone number if all else fails? And vice-versa?
  3. Do they have access to your team's wiki space? Jira? Chat rooms?
  4. Can they join a Google Meet/Zoom meeting with you and the team? Including working camera and microphone?
  5. Can they access Google Calendar and have they been invited to team meetings? Do they know the etiquette for meetings (to accept and decline) and how to set up meetings with others?
  6. Have they completed the expected onboarding training provided by the company?
  7. Do they have access to the role-specific tools they'll need to do their job? Source control, CI, Salesforce, Zendesk, etc. (make a checklist of these for your team!)

Cadence of Work

It's critical to establish a normal work cadence, and that can be particularly challenging if someone starts off fully remote. For some interns, this may be their first time working in a professional environment, and they may need more guidance. Some suggestions for getting that established:

  1. Hold an explicit kickoff meeting between the intern and mentor in which they review the project/goals, and discuss how the team will work and interact (meeting frequency, chat room communication, etc).
  2. If an intern is located in a different timezone, establish what would be normal working hours and how the team will update them if they miss certain meetings.
  3. Ensure there's a proper introduction to the team. This could be a dedicated 1:1 for each member, or a block of the team's regular meeting to introduce the candidate to the team and vice-versa. Set up a social lunch or hour during the first week to have more casual conversations.
  4. Schedule weekly 1:1s and checkpoint meetings for the duration of the internship.
  5. Set up a very short-term goal that can be achieved quickly so the intern can get a sense for the end-to-end. Similar to how you might learn a new card game by "playing a few hands for fun" - the best way to learn is to dive right in.
  6. Consider having the mentor do an end-of-day check-in with the intern every day for at least the first week or two.
  7. Schedule at least one dedicated midpoint meeting to provide feedback. This is a time to evaluate how they’re progressing against their goals and deliverables and if they’re meeting their internship expectations. If they are, great. If not, it is essential at this point to inform them so they can improve.

Social Activities

A major part of a great internship also involves social activities and networking opportunities for interns to connect with different people. This becomes more difficult and requires ever more creativity to try to create those experiences. Here are some ideas:

  1. Hold weekly virtual intern lunches and if there’s budget, offer a food delivery gift card. Have themed lunches.
  2. Think about virtual social games, Netflix parties, and possibly other apps that can augment virtual networking experiences.
  3. Set up social hours for smaller groups of interns to connect and rotate. Have interns meet with interns from their original office locations, from the same departments, and so on.
  4. Set up an intern group chat and have a topic, joke, picture, or meme of the day to keep the conversations alive.
  5. Create a constant “water cooler” Google Meet/Zoom room so folks can sign on anytime and see who is on.
  6. Host virtual conversations or round tables with executives and senior leaders.
  7. Involve them in other company activities, especially Employee Resource Groups (ERGs).
  8. Pair them with a buddy who is an employee from a different team or function. Also, pair them up with a peer intern buddy so they can share their experience.
  9. Send all the swag love you can so they can deck out their house and wardrobe. Maybe not all at once, so they can get some surprises.
  10. Find a way to highlight interns during regular all-hands meetings or other company events, so people are reminded they’re here.
  11. Survey the students and get their ideas! Very likely, they have better ideas on how to socialize in this virtual world.

Interns in the past have proven to be invaluable and have made huge contributions to Cloudflare. So, we are excited that we are able to double the program to give more students meaningful work this summer. Despite these very odd and not-so-normal times, we are committed to providing them the best experience possible and making it memorable.

We hope that by sharing our approach we can help other companies make the pivot to remote internships more easily. If you’re interested in collaborating and sharing ideas, please contact internships@cloudflare.com.

Tuesday, 28 April

23:00

Sailfish OS Rokua is now available [Jolla Blog]

Rokua forms part of Finland’s first UNESCO Geopark. In Rokua it is easy to see traces of the Ice Age. The park’s many esker ridges and wooded sandhills are blanketed with silvery lichens. Scattered through the park are many kettle hole lakes nestling in sandy hollows.

It has been almost a year since my previous blogpost aimed at a more tech savvy audience. With Sailfish OS Rokua it felt again like a good opportunity for such a blog post. The changes to the Sailfish OS user experience are available at the end of the document, if you want to skip the technical topics.

There are a lot of things that are not visible for a casual Sailfish OS user. This 3.3.0 release contains a vast number of updates for the lower level of the stack. We’ve included for example the updated toolchain, a new version of Python and many updates to core libraries such as glib2. In this blog I will go through a few of the changes and what they mean in practice for users, developers and Sailfish OS in general.

It is not just about updating one component – “Distribution jenga”

As many of you know, operating systems consist of hundreds of components. These components are connected to each other at either compile time, link time or run time. When we conduct low-level updates, as we have in this release, the changes for one component multiply and we end up updating tens of components because of their dependencies.

One such case was the update of the gobject-introspection package to version 1.63.2. The librsvg library started to fail during the process. This librsvg failure looked like an issue with vala. We decided to update vala as well to reduce future maintenance work. This required the autoconf-archive package, which we haven’t previously provided. Packaging the latest autoconf-archive then conflicted with gnome-common, which needed a small modification to make it compatible with autoconf-archive. After all the above, we finally got autoconf-archive installed, returned to vala, and got all the pieces compiled together.

After compiling these changes together we had to integrate everything in one go to prevent the development branch from breaking. This was just one example of the many changes we have provided with this release. With the toolchain it was much more time consuming to actually get all the build failures fixed.

Binary compatibility with the Toolchain update

The most difficult, and at the same time one of the most anticipated updates by the whole Sailfish OS development community is the update of the toolchain. This includes an update of GCC from version 4.9.4 to version 8.3. We did not update to version 9.x or 10.x, as the work was started when the latest release by linaro/ARM was 8.3. We wanted to finalize this version before taking the next step. As mentioned also in the Hossa blog post, it is better to take smaller steps when updating complex components. While we did not get the latest and greatest, the changes are still extensive. GCC 4.9.x series was released in 2014 while GCC 8.3 is from Feb 2019. Even though the change is significant, we managed to preserve binary compatibility. All the binaries and applications compiled with the old toolchain should work just as they did before.

New code optimizations and support for the more recent C++ standard are a couple of the possibilities we gained with the update. The GCC update rebuilt the whole OS multiple times because of circular dependencies in the code, as expected. This process revealed dozens of packages that needed to be fixed. Some of the fixes were trivial, such as bluetooth-rfkill and buteo-mtp. At times we simply had to adopt a fix that was already available for the previous toolchain. This was the case for example with gst-plugins-base. There were tens of similar PRs that had to be made all over the stack to get everything built.

Some of the problems do not become visible at compile or linking time, making them very hard to notice. For example, we ran into problems with the old perl. We first considered updating perl to a newer version, but decided to develop a small patch instead. The rationale for this decision was to reduce risk for the release, as the amount of changes all combined was already considerable. In addition, it should be noted that perl is not installed on the devices by default; it is in the stack because it is needed for the builds. Nevertheless, we’ll need to look into updating perl later.

All things considered the toolchain update is a major step forward and with this change we will have development opportunities which we do not even know of yet. We invite you to comment and collaborate if you think of new ways or have additional ideas about how these changes will benefit us all.

Python 2 support ended

Python 2 support ended on 1st of January 2020. Python by default is not installed on Sailfish OS devices; it is used in our build environment. It also provides us with the pyotherside bindings to Qt, which allow developers to create Qt-based applications using Python. In this release python3 was updated to version 3.8.1 and python2 to the latest version 2.7.17. Having two Python versions in the stack means increased maintenance, and thus we have decided to start deprecating Python 2 and will focus exclusively on Python 3 in the future.

Removing Python 2 may cause extra work for our development community, as some may still be using it. Despite this, the decision to remove Python 2 and concentrate our efforts on upgrading the stack is necessary and evident. Python 2 packages will remain in our repositories with this release, and partly in the next release as well, as removing the dependencies will take time. Nevertheless, please consider moving all your code to Python 3 as soon as possible.

As a side effect of this work we were able to improve our build time, for example for dsme. Many of the dependencies on Python were not really needed, so removing them reduces the need for rebuilds.

QEMU

QEMU is an important part of our toolchain, used to compile our ARM and aarch64 binaries on x86-based machines. Over time we have experienced some problems with QEMU and it has become evident that we needed to update it to a newer version. Even though the release includes version 4.2.0 (an update from the old 2.x branch), internally we conducted the upgrade in two steps, first to the 4.0 release and then to 4.2.

The change required a notable amount of work and resulted in no visible improvements for Sailfish OS end-users as such. However, developers will now be able to enjoy the new capabilities.

Library updates

Some of the updates would not have been possible with the previous version of the toolchain, as was the case with glibc which we’ve now updated to version 2.30 from the previous version 2.28.

We have also worked on different system components, such as expat, file, e2fsprogs, libgcrypt, libsoup, augeas, wpa_supplicant, fribidi, glib2, nss and nspr as part of our normal maintenance work.

Included are also updates to lower level components that improve the user experience. The updated Gstreamer 1.16.1 offers better support for selected video and audio codecs. We also switched gstreamer to use ffmpeg for all SW codecs on the devices.

Technical debt installment

As part of our move towards a more maintainable system we have also been switching to use busybox more widely. In this release we moved coreutils, tar and vi to busybox. An additional benefit of using busybox is that it reduces the memory footprint of our image. With the coreutils replacement we saved ~4.2MB and with tar ~1.4MB from all device images. The vim-minimal replacement saved ~1.6MB of space from images with developer mode.
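As a quick sanity check from a device shell (the exact symlink layout is an assumption here and can differ between images), you can confirm whether a command is now provided as a busybox applet:

ls -l "$(command -v tar)"
busybox --list | grep -w tar

If tar resolves to the busybox binary and shows up in the applet list, the replacement is active on that image.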

As with any platform there are times when one needs to look back a bit in order to consider how to proceed in the future. We have had our fair share of issues with statefs and we have come to the conclusion that it is not worth maintaining anymore. As such we will deprecate statefs after the 3.3.0 release. Instead of using statefs we will be moving to our other APIs. For example, in the future, status information will move to libqofono, which is already available in the stack. Other examples that used to require statefs are maliit and the browser.

We also started to deprecate qtaround. Qtaround is a small helper function library that is not used anymore and thus maintaining it in the stack does not make sense. We also removed other repositories and packages that are no longer used, such as cutes-js, cutes-qt5, meego-lsb, and libtalloc to name a few.

Sandboxing system services

There was also work done to further limit access to system services, which was mostly achieved using the systemd sandboxing features. Admittedly this is just a small step, and the older systemd we currently use does not include all of the latest features, but it still provides a clear path forward for limiting our attack surface. Examples of how it was done can be seen in the mce and sensorfw repositories.
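As a rough sketch of what this kind of hardening involves (purely illustrative, not the exact configuration shipped for mce or sensorfw), the directives are standard systemd options that exist in older systemd versions as well, and they can be applied to a service through a drop-in file:

mkdir -p /etc/systemd/system/mce.service.d
cat > /etc/systemd/system/mce.service.d/50-sandbox.conf << 'EOF'
[Service]
ProtectHome=true
ProtectSystem=full
PrivateTmp=true
EOF
systemctl daemon-reload

The actual directive sets chosen for mce and sensorfw can be reviewed in the repositories mentioned above.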

While systemd sandboxing is a small thing and currently only used by system services, we have also been looking to provide similar capabilities for applications. There are no updates on the matter within this release, but there is already firejail packaging available for those who want to do early experiments. Whether we will make it part of the official API remains to be seen, and any feedback would again be welcome at together.

Changes for upcoming features

As mentioned in our earlier blog post we are working on providing multi-user functionality. This is something that has been requested by our partners. Access for different users on the same device is something that’s needed particularly in corporate environments where, for example, devices may be mounted in cars. Some lower-level enablers are already included in this 3.3.0 release.

We also noted that the community has been working on Flatpak. To help the community effort we merged libseccomp and json-glib into Sailfish OS. We undertook internal research regarding Flatpak with our partners, and while Flatpak seems nice, the conclusion was that we do not see Flatpak as the selected Sailfish OS application bundling framework, mainly due to its high resource usage. Application sandboxing techniques need further research and we’re still looking into the right approach.

Visible changes

Sailfish OS 3.3 is a major release that also includes visible changes. Here is a recap of some of them.

Weather icons

The new icon set is based on the current design language. We’ve highlighted key elements like the sun, the moon, and rain so that they visually stand out from the symbol. Hence the sun will look ‘sunny’ on dark as well as on light Ambiences.

Weather icons

EAP-TLS Support

In this release support for connecting to WPA-EAP(TTLS) and WPA-EAP(TLS) networks with certificates has been added.
WPA-EAP certificates support

Global Address List (GAL) support

For all Exchange Active Sync users, you will now find support for searching contacts from the Global Address List (GAL) when adding recipients to an email. This support will be extended further in the future.
Global Address list support

Nextcloud account

Nextcloud accounts can now be added directly. The support includes the most comprehensive collection of features available so far with any integrated account, including backups, contacts, calendar, images and notifications.
Extended Nextcloud support

Location stack

For some time we have been offering Mozilla Location Services for our community. As explained on their blog, Mozilla will unfortunately be ending support for this. This is visible in positioning performance for the community releases. While our commercial partners have their own solutions for assisted location providers, we do not have an alternative for our community at this point in time.

We have identified a few fixes in our location stack that improved the performance and we are looking for more. We are also evaluating alternative services we could adopt.

So quite a lot of things happened and more stuff to come, stay tuned 🙂

To celebrate the new release we are offering Sailfish X at a special price. You can get the offer by entering the voucher code VAPPU when checking out from the Jolla Shop. The offer is valid only for a limited time.

Br,
Sage

The post Sailfish OS Rokua is now available appeared first on Jolla Blog.

10:37

Saturday Morning Breakfast Cereal - Blank [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Bonus points if you call people idiots when the stats are on your side and elitists when they aren't.



08:54

Featured Session: Restarting the Global Economy [MIT Technology Review]

Economies are in turmoil. Get the answers to the questions on everyone’s mind:

  • What will it take to stabilize and boost regional, national, and global economies?
  • What can business leaders do to prepare for growth?
  • In this nearly worldwide pause, how can organizations reset, rethink, and innovate the way their business is done in order to become more resilient in the face of future threats?

In this session, you’ll hear from Linda Yueh, a fellow at Oxford University, on how we’ll gradually reopen and rebuild a global economy and what business leaders should do to prepare.

Register today and we’ll see you virtually in June.

08:49

Hear from the CEOs of Slack and Zoom [MIT Technology Review]

Technology is changing the nature of work on every level of business, from workforce talent to digital implementation and automation. What technologies are having the most significant impact? How do we make smart, practical decisions that enhance and embrace the technologies redefining the way we work today?  

EmTech Next concludes with Eric Yuan, founder and CEO of Zoom, and Stewart Butterfield, cofounder and CEO of Slack, who discuss what it takes to build the work tools of the future. You’ll leave with concrete ideas on how your organization can better integrate remote tools to optimize your workflow and efficiency, from anywhere in the world.

Put the collective power of MIT Technology Review and Harvard Business Review to work for your organization. Register now and join us virtually on June 8-10.

08:41

The business of emerging technologies [MIT Technology Review]

A continuous stream of emerging technologies is radically transforming business, disrupting the technological status quo, and reinventing the way people work. On day two of EmTech Next, we’ll delve into the state of technology today and what leaders need to know now in order to prosper and thrive.

  • Transforming 5G Communications. 5G is unlocking the potential of advanced communications, enabling smart machines, AI, and people to interconnect on a vastly deeper and more effective level. Where is the baseline for 5G? What is its true potential, and what is just industry hype? And how is 5G disrupting business technology today?
  • Smart Manufacturing and the Power of Data. Industry 4.0 has taken hold; smart manufacturing is a reality. Now what? How thoroughly has emerging technology disrupted the manufacturing business, and how can decision makers make better use of massive data, smart innovations, increased computing power, and the potential 5G windfall?

Attend these essential sessions and more at EmTech Next. Purchase your ticket today.

08:32

Navigating change as a leader [MIT Technology Review]

Times of crisis require leadership and strategy to navigate the path forward. Day one of EmTech Next digs into topics including:

  • Innovation and Leadership in a Time of Crisis. If innovation is the fuel that drives business, then what is the formula for innovation? In this segment, we will explore how smart leaders develop, adopt, and fund the efforts to foster disruptive technologies.
  • Creating a Cyber-Resilient Organization. Examining everything from personal data breaches to corporate espionage and ransomware, experts from Booz Allen, Mastercard, and the City of New York discuss the steps organizations must take to defend their data and the processes to follow when defenses have been breached.
  • EmTech Spotlight on the Future. We take a deep dive into today’s issues that directly impact emerging technologies, leadership decisions, and business strategies for the future.

Don’t miss these important discussions. Purchase your ticket today.

08:23

Technology changes everything [MIT Technology Review]

We are living in a changed world.

Technology and what it means to be digitally resilient are driving the nature of work on every level of the organization as never before. Your key to success as a leader will be making smart, practical decisions about enhancing the technology you use today and embracing the technology poised to restart business.

Join us June 8-10 for EmTech Next, MIT Technology Review’s 3-day virtual conference hosted in partnership with Harvard Business Review. We’re proud to offer a thought-provoking analysis of the technologies and forces fueling business transformation. Topics include: 

  • How resilient leaders respond to change, and how they prepare for the next unknown
  • Upskilling your existing workforce to keep current with new technology
  • Opportunities and issues that arise when 5G-connected devices impact every aspect of our lives
  • Advances in manufacturing productivity that AI will make possible
  • Keeping your business agile and focused through times of unprecedented innovation and change

Don’t miss this must-attend virtual conference. Register now.

08:00

Upgrading Fedora 31 to Fedora 32 [Fedora Magazine]

Fedora 32 is available now. You’ll likely want to upgrade your system to get the latest features available in Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 31 to Fedora 32.

Before upgrading, visit the wiki page of common Fedora 32 bugs to see if there’s an issue that might affect your upgrade. Although the Fedora community tries to ensure upgrades work well, there’s no way to guarantee this for every combination of hardware and software that users might have.

Upgrading Fedora 31 Workstation to Fedora 32

Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the GNOME Software app. Or you can choose Software from GNOME Shell.

Choose the Updates tab in GNOME Software and you should see a screen informing you that Fedora 32 is Now Available.

If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.

Choose Download to fetch the upgrade packages. You can continue working until you reach a stopping point and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.

Using the command line

If you’ve upgraded from past Fedora releases, you are likely familiar with the dnf upgrade plugin. This method is the recommended and supported way to upgrade from Fedora 31 to Fedora 32. Using this plugin will make your upgrade to Fedora 32 simple and easy.

1. Update software and back up your system

Before you start the upgrade process, make sure you have the latest software for Fedora 31. This is particularly important if you have modular software installed; the latest versions of dnf and GNOME Software include improvements to the upgrade process for some modular streams. To update your software, use GNOME Software or enter the following command in a terminal.

sudo dnf upgrade --refresh

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see the backup series on the Fedora Magazine.

2. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

sudo dnf install dnf-plugin-system-upgrade

3. Start the update with DNF

Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

sudo dnf system-upgrade download --releasever=32

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the --allowerasing flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
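
For example, the same download command with that flag added looks like this:

sudo dnf system-upgrade download --releasever=32 --allowerasing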

4. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

sudo dnf system-upgrade reboot

Your system will restart after this. Many releases ago, the fedup tool would create a new option on the kernel selection / boot screen. With the dnf-plugin-system-upgrade package, your system reboots into the current kernel installed for Fedora 31; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 32 system.
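
Once you have confirmed the new release is working, you can optionally reclaim the disk space used by the downloaded upgrade packages. A minimal example, assuming you want to remove the cached upgrade data with the same plugin:

sudo dnf system-upgrade clean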

Upgrading Fedora: Upgrade complete!

Resolving upgrade problems

On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the DNF system upgrade quick docs for more information on troubleshooting.
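
For example, recent versions of the plugin can show the journal output from the most recent upgrade attempt, which is often the quickest way to see what went wrong (check the quick docs for the exact options available in your version):

sudo dnf system-upgrade log --number=-1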

If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.
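
As a rough sketch, assuming a third-party repository with the id rpmfusion-free (substitute the actual ids reported by dnf repolist on your system), you could leave it out of the upgrade like this:

sudo dnf system-upgrade download --releasever=32 --disablerepo=rpmfusion-free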

Fedora 32 is officially here! [Fedora Magazine]

It’s here! We’re proud to announce the release of Fedora 32. Thanks to the hard work of thousands of Fedora community members and contributors, we’re celebrating yet another on-time release.

If you just want to get to the bits without delay, head over to https://getfedora.org/ right now. For details, read on!

All of Fedora’s Flavors

Fedora Editions are targeted outputs geared toward specific “showcase” uses.

Fedora Workstation focuses on the desktop. In particular, it’s geared toward software developers who want a “just works” Linux operating system experience. This release features GNOME 3.36, which has plenty of great improvements as usual. My favorite is the new lock screen!

Fedora Server brings the latest in cutting-edge open source server software to systems administrators in an easy-to-deploy fashion. For edge computing use cases, Fedora IoT provides a strong foundation for IoT ecosystems.

Fedora CoreOS is an emerging Fedora Edition. It’s an automatically-updating, minimal operating system for running containerized workloads securely and at scale. It offers several update streams that can be followed for automatic updates that occur roughly every two weeks. Currently the next stream is based on Fedora 32, with the testing and stable streams to follow. You can find information about released artifacts that follow the next stream from the download page and information about how to use those artifacts in the Fedora CoreOS Documentation.

Of course, we produce more than just the editions. Fedora Spins and Labs target a variety of audiences and use cases, including the Fedora Astronomy Lab, which brings a complete open source toolchain to both amateur and professional astronomers, and desktop environments like KDE Plasma and Xfce. New in Fedora 32 is the Comp Neuro Lab, developed by our Neuroscience Special Interest Group to enable computational neuroscience.

And, don’t forget our alternate architectures: ARM AArch64, Power, and S390x. Of particular note, we have improved support for Pine64 devices, NVidia Jetson 64 bit platforms, and the Rockchip system-on-a-chip devices including the Rock960, RockPro64, and Rock64.

General improvements

No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. Following our “First” foundation, we’ve updated key programming language and system library packages, including GCC 10, Ruby 2.7, and Python 3.8. Of course, with Python 2 past end-of-life, we’ve removed most Python 2 packages from Fedora. A legacy python27 package is provided for developers and users who still need it. In Fedora Workstation, we’ve enabled the EarlyOOM service by default to improve the user experience in low-memory situations.
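
For example, if you still depend on Python 2 for legacy scripts, you can pull in that package from a terminal:

sudo dnf install python27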

We’re excited for you to try out the new release! Go to https://getfedora.org/ and download it now. Or if you’re already running a Fedora operating system, follow the easy upgrade instructions. For more information on the new features in Fedora 32, see the release notes.

In the unlikely event of a problem….

If you run into a problem, check out the Fedora 32 Common Bugs page, and if you have questions, visit our Ask Fedora user-support platform.

Thank you everyone

Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release during a pandemic. Fedora is a community, and it’s great to see how much we’ve supported each other. I invite you to join us in the Red Hat Summit Virtual Experience 28-29 April to learn more about Fedora and other communities.

Edited 1800 UTC on 28 April to add a link to the release notes.

What’s new in Fedora 32 Workstation [Fedora Magazine]

Fedora 32 Workstation is the latest release of our free, leading-edge operating system. You can download it from the official website here right now. There are several new and noteworthy changes in Fedora 32 Workstation. Read more details below.

GNOME 3.36

Fedora 32 Workstation includes the latest release of the GNOME Desktop Environment for users of all types. GNOME 3.36 in Fedora 32 Workstation brings many updates and improvements, including:

Redesigned Lock Screen

The lock screen in Fedora 32 is a totally new experience. The new design removes the “window shade” metaphor used in previous releases, and focuses on ease and speed of use.

Unlock screen in Fedora 32

New Extensions Application

Fedora 32 features the new Extensions application for easily managing your GNOME Extensions. In the past, extensions were installed, configured, and enabled using the Software application and/or the Tweak Tool.

The new Extensions application in Fedora 32

Note that the Extensions application is not installed by default on Fedora 32. Either use the Software application to search for and install it, or use the following command in the terminal:

sudo dnf install gnome-extensions-app

Reorganized Settings

Eagle-eyed Fedora users will notice that the Settings application has been re-organized. The structure of the settings categories is a lot flatter, resulting in more settings being visible at a glance.

Additionally, the About category now has more information about your system, including which windowing system you are running (e.g. Wayland).

The reorganized settings application in Fedora 32

Redesigned Notifications / Calendar popover

The Notifications / Calendar popover — toggled by clicking on the Date and Time at the top of your desktop — has had numerous small style tweaks. Additionally, the popover now has a Do Not Disturb switch to quickly disable all notifications. This quick access is useful when you are presenting your screen and don't want personal notifications popping up.

The new Notification / Calendar popover in Fedora 32

Redesigned Clocks Application

The Clocks application is totally redesigned in Fedora 32. It features a design that works better on smaller windows.

The Clocks application in Fedora 32

GNOME 3.36 also provides many additional features and enhancements. Check out the GNOME 3.36 Release Notes for further information.


Improved Out of Memory handling

Previously, when a system encountered a low-memory situation, it could fall into heavy swap usage (aka swap thrashing), sometimes causing the Workstation UI to slow down or become unresponsive for periods of time. Fedora 32 Workstation now ships and enables EarlyOOM by default. EarlyOOM enables users to more quickly recover and regain control over their system in low-memory situations with heavy swap usage.
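
If you want to confirm the service is running on your upgraded system, you can check it with systemd (assuming the unit name earlyoom.service used by the Fedora earlyoom package):

systemctl status earlyoom.service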

Ubuntu 20.04 Gaming Performance Across Desktops, X.Org vs. Wayland [Phoronix]

Last month we provided some early benchmarks looking at the Ubuntu 20.04 X.Org vs. Wayland gaming performance under GNOME 3.36, but now that Ubuntu 20.04 LTS has been officially released, here is a look at the AMD Radeon Linux gaming performance across a wide variety of desktops on both X.Org and Wayland where supported.

07:00

How AI is changing the customer experience [MIT Technology Review]

AI is rapidly transforming the way that companies interact with their customers. MIT Technology Review Insights’ survey of 1,004 business leaders, “The global AI agenda,” found that customer service is the most active department for AI deployment today. By 2022, it will remain the leading area of AI use in companies (say 73% of respondents), followed by sales and marketing (59%), a part of the business that just a third of surveyed executives had tapped into as of 2019.

Intimacy and efficiency

In recent years, companies have invested in customer service AI primarily to improve efficiency, by decreasing call processing and complaint resolution times. Organizations known as leaders in the customer experience field have also looked toward AI to increase intimacy—to bring a deeper level of customer understanding, drive customization, and create personalized journeys.

Genesys, a software company with solutions for contact centers, voice, chat, and messaging, works with thousands of organizations all over the world, handling some 70 billion customer interactions a year. The goal across each of those interactions, says CEO Tony Bates, is to “delight someone in the moment and create an end-to-end experience that makes all of us as individuals feel unique.”

Experience is the ultimate differentiator, he says, and one that is leveling the playing field between larger, traditional businesses and new, tech-driven market entrants—product, pricing, and branding levers are ineffective without an experience that feels truly personalized. “Every time I interact with a business, I should feel better after that interaction than I felt before.”

In sales and marketing processes, part of the personalization involves “predictive engagement”—knowing when and how to interact with the customer. This depends on who the customer is, what stage of the buying cycle they are at, what they are buying, and their personal preferences for communication. It also requires intelligence in understanding where the customer is getting stuck and helping them navigate those points.

Marketing segmentation models of the past will be subject to increasing crossover, as older generations become more digitally skilled. “The idea that you can create personas, and then use them to target or serve someone, is over in my opinion,” says Bates. “The best place to learn about someone is at the business’s front door [website or call center] and not at the backdoor, like a CRM or database.”

The survey data shows that for industries with large customer bases such as travel and hospitality, consumer goods and retail, and IT and telecommunications, customer care and personalization of products and services are among the most important AI use cases. In the travel and hospitality sector, nearly two-thirds of respondents cite customer care as the leading application.

The goal of a personalized approach should be to deliver a service that empathizes with the customer. For customer service organizations measured on efficiency metrics, a change in mindset will be required—some customers consider a 30-minute phone conversation as a truly great experience. “But on the flip side, I should be able to use AI to offset that with quick transactions or even use conversational AI and bots to work on the efficiency side,” says Bates.

Building connectivity across data sets

With vast transaction data sets available, Genesys is exploring how they could be used to improve experiences in the future. “We do think that there is a need to share information across these large data sets,” says Bates. “If we can do this in an anonymized way, in a safe and secure way, we can continue to make much more personalized experiences.” This would allow companies to join different parts of a customer journey together to create more interconnected experiences.

This isn’t a straightforward transition for most organizations, as the majority of businesses are structured in silos—“they haven’t even been sharing the data they do have,” he adds. Another requirement is for technology vendors to work more closely together, enabling their enterprise customers to deliver great experiences. To help build this connectivity, Genesys is part of industry alliances like CIM (Cloud Information Model), with tech leaders Amazon Web Services and Salesforce. CIM aims to provide common standards and source code to make it easier for organizations to connect data across multiple cloud platforms and disparate systems, connecting technologies such as point-of-sale systems, digital marketing platforms, contact centers, CRM systems, and more.

A future of data sharing?

Data sharing has the potential to unlock new value for many industries. In the public sector, the concept of “open data” is well known. Publicly available data sets on transport, jobs and the economy, security, and health, among many others, allow developers to create new tools and services, thus solving community problems. In the private sector there are also emerging examples of data sharing, such as logistics partners sharing data to increase supply chain visibility, telecommunications companies sharing data with banks in cases of suspected fraud, and pharmaceutical companies sharing drug research data that they can each use to train AI algorithms.

In the future, companies might also consider sharing data with organizations in their own or adjacent industries, if it were to lead to supply chain efficiencies, improved product development, or enhanced customer experiences, according to the MIT Technology Review Insights survey. Of the 11 industries covered in the study, respondents from the consumer goods and retail sector proved the most enthusiastic about data sharing, with nearly a quarter describing themselves as “very willing” to share data, and a further 57% being “somewhat willing.”

Other industries can learn from financial services, says Bates, where regulators have given consumers greater control over their data to provide portability between banks, fintechs, and other players, in order to access a wider range of services. “I think the next big wave is that notion of a digital profile where you and I can control what we do and don’t want to share—I would be willing to share a little bit more if I got a much better experience.”

06:00

Creating a True One-Stop Solution for Companies to Go Global: Announcing a Partnership Between Cloudflare and JD Cloud & AI [The Cloudflare Blog]


It’s well known that global companies can face challenges doing business in and out of China due to the country’s unique rules, regulations, and norms, not to mention recent political and trade complications. Less well known is that China’s logistical and technical network infrastructure is also quite different from the rest of the world’s. With global Internet traffic up 30% over the past month due to the pandemic, these logistical and technical hurdles are increasing the burden for global businesses at exactly the wrong time. It’s now not unusual for someone based in China to have to wait extended periods and often be unable to access applications hosted elsewhere, or vice-versa, due to the lower performance of international Internet traffic to and from China. This affects global companies with customers, suppliers or employees in China, and Chinese companies who are trying to reach global users.

Our mission is to help build a better Internet, for everyone, everywhere. So, today we’re excited to announce a significant strategic partnership with JD Cloud & AI, the cloud and intelligent technology business unit of Chinese Internet giant JD.com. Through this partnership, we’ll be adding 150 data centers in mainland China, an increase in the region of over 700%. The partnership will also enable JD to provide a Cloudflare-powered service to China-based customers. As a result, it will create a one-stop solution for companies both inside and outside of China to go truly global.

Cloudflare’s Long Experience in China

Cloudflare has helped our global customers deliver a secure, fast, and reliable Internet experience for China-based visitors since 2015 and we’ve served Chinese customers since our inception. Cloudflare customers currently are able to extend their configurations with the click of a button across data centers in 17 cities in mainland China. As a result, they’re able to deliver their content faster, more securely, and reliably in-country. The demand for the service has been overwhelming, and we’ve been exploring ways to provide our customers with a network that would have an order of magnitude greater coverage.

China’s Balkanized Network Architecture

What we’ve learned from our experience is that having a widely distributed network and world class partners in China matters more there than almost anywhere else in the world. To understand why, it’s important to understand the specific technical and logistical hurdles that exist there.

China has a non-uniform technical and network infrastructure, directly impacting Internet performance. Mainland China has three major telecom carriers—China Telecom, China Unicom, and China Mobile—serving 22 provinces, 4 municipalities, and 5 autonomous regions. In many of these places, each carrier operates a distinct network, and in some provinces more than one, and these networks often operate independently of one another. The result is many different sub-networks that need to be coordinated.

Regulatory hurdles in the network space can also present challenges. Unlike the rest of the world, where Anycast routing is generally available, in China the three main ISPs control IP address allocation and routing for customers’ networks both inside the country and globally. Small or large companies rarely own their own IP address allocations, and even fewer use BGP to control Internet routing. Because of the lack of BGP and the static allocation of IPs, the carriers’ customers operate on IP addresses that are homed onto a single network’s backbone.

The combination of this single-homed IP connectivity and the fragmented network topology leads to frequent bottlenecks between the various domestic ISPs. This makes network coverage all the more important. Add in a rapidly expanding economy with growing Internet activity, plus extraordinary times such as these, which put even more strain on the Internet, and it's easy to see why situations regularly arise where too much traffic meets too little capacity.

The Challenge of Putting Boots on the Ground

Compounding these hurdles further is that, from a business and logistics perspective, China is similarly a collection of sub-markets. There are huge variations between provinces in terms of population levels, average income, consumer spending, and the like. Regional business regulations also vary dramatically. Although it is slowly opening up to outside competition, the Chinese transportation and logistics market is one of the most highly regulated in the world. Regulation exists at a number of different tiers, imposed by national, regional, and local authorities. Finally, there are shortages of high-quality logistics facilities and warehousing spaces, making it hard to find domestic providers for managing import, export, and local transportation as well as trade compliance. You often have to hire consultants who specialize in the China market to assess quality, trustworthiness, and other factors.

This makes it challenging for foreign companies not only to deliver a fast, secure, and reliable Internet experience in China, but also, as we often hear from our customers, to navigate the country more generally.

The Importance of a World Class Local Partner

Given these technical, logistical, and regulatory complexities, it’s very difficult for foreign companies to navigate the China landscape without local expertise. Partnering with JD Cloud & AI provides not only local expertise, but also a relationship with one of the world’s largest logistics, e-commerce, and Internet companies, JD.com.


JD.com is a juggernaut, operating at a scale that’s rare among global companies. It’s China’s largest retailer by revenue, online or offline, with one billion retail customers, a quarter billion registered users, seven million enterprise customers, and $83 billion in 2019 revenue. Its highly automated logistics system uses robots, AI, and fleets of drones to cover 99% of China’s population.

JD decided several years ago to open its technology platform to its enterprise customers and began offering cloud services through a new business unit called JD Cloud & AI. JD Cloud & AI has quickly become the fastest growing cloud company among the top five Chinese providers. It offers a full range of services across eight availability zones in China and has made security and compliance a key part of its offering. In line with its parent company, JD Cloud & AI has made serving a global audience a key part of its strategy and has partnered with the likes of Microsoft and Citrix to build on this strategy. Importantly, like Cloudflare, the company has continued to invest in its infrastructure through the current pandemic, and has been critical to keeping China’s supply chains flowing and its businesses functioning.

Taking International Companies Into China & Chinese Companies Global


Our partnership with JD Cloud & AI will allow international businesses to grow their online presence in China without having to worry about managing separate tools with separate vendors for security and performance in China. Customers will benefit from greater performance and security inside China using the same configurations that they use with Cloudflare everywhere else in the world.

Using Cloudflare's international network outside of China, and JD Cloud & AI’s network inside of China, any enterprise can rapidly and securely deploy cloud-based firewall, WAN optimization, distributed denial of service (DDoS) mitigation, content delivery, DNS services, and Cloudflare Workers, our serverless computing solution, worldwide. All with the click of a button within Cloudflare’s dashboard and without deploying a single piece of hardware.

For those customers who need it, we also expect JD.com to be able to help with in-country logistics. JD operates over 700 warehouses that cover almost all the counties and districts in China. It has over 360 million active individual consumers and seven million enterprise customers that purchase products on its platform. For Cloudflare customers interested in reaching these Chinese end-customers, no matter where they are located in China, JD.com will be able to help.

The partnership with JD Cloud & AI will also allow us to help Chinese companies reach global audiences. JD Cloud & AI will use Cloudflare's international network outside of China, and the JD Cloud & AI network inside of China, to allow any China-based enterprise to use Cloudflare’s integrated performance and security services worldwide, all seamlessly controlled from within the JD Cloud & AI dashboard.

Data Management

As always, we’re taking care to be thoughtful about the treatment of customer data with this partnership. Cloudflare operates all services outside of China, and JD Cloud & AI all services inside of China. No Cloudflare customer traffic passes through the China network unless a customer explicitly opts-in to the service. And, for Cloudflare customers that opt-in to proxying content inside China, traffic and log data from outside of China is not stored in the China network or shared with our partner.

A One-Stop, Truly Global Solution


We are excited about this new partnership which will help us continue to offer customers the best performance and security service available anywhere in the world — and as a one-stop solution. While we can’t control the trade and political climate, which will inevitably ebb and flow over time, we can help our customers with technical and logistical challenges they may face doing business around the world, especially in these challenging times.

New and existing Cloudflare customers can request to be served in China by filling out an information request at https://www.cloudflare.com/china.

04:00

Five things we need to do to make contact tracing really work [MIT Technology Review]

The ongoing pandemic is fertile ground for opportunistic hucksters, loud frauds, and coronavirus deniers who attack or blame everyone and everything from Chinese-Americans to Bill Gates to 5G networks. The latest front in this bizarre war: contact tracing.

Tracing is the technique public health workers use to identify carriers of an infectious disease and then uncover who else they may have exposed, in an effort to isolate those at risk and halt the illness’s spread. It’s a time-tested investigation method used to successfully fight outbreaks of diseases including measles, HIV, and Ebola. Countries around the world are already using it against covid-19 with great success, and now many US states are beginning to assemble their own covid tracing teams. At the same time, powerful technology companies including Apple and Google are building systems to help expand and automate tracing and notify people who might have been exposed. Yet contact tracing—like testing, social distancing, and isolation—is now caught in the political crossfire.

“That’s totally ridiculous,” Rudy Giuliani told Fox News host Laura Ingraham when asked about New York’s plan to hire an “army” of coronavirus tracers. “Then we should trace everybody for cancer and heart disease and obesity,” he mocked. “I mean, a lot of things kill you more than covid-19, so we should be traced for all those things.”

“Yeah,” Ingraham snorted, rolling her eyes. “An army of tracers.”

Virtually all medical professionals—and medical bodies from America’s Centers for Disease Control to the World Health Organization—emphatically say contact tracing is a crucial part of the three-pronged plan for returning the world to normal: test, trace, isolate.

“I don’t think we can overstate the importance of contact tracing,” says Seema Yasmin, director of the Stanford Health Communication Initiative and a former CDC investigator who focused on epidemics. “It’s been at the cornerstone of every major epidemic investigation from SARS to Ebola and beyond.”

While testing is the absolute top priority—you have to find the people who are infected, after all—tracing is vital for stopping a disease from spreading out of control. Once you have identified people at risk, you put those people in isolation before they can spread coronavirus further.

“Coronavirus has a weakness because the typical transmission time is fairly long—about a week,” says Microsoft computer scientist John Langford, who has been working with the state of Washington on its contact tracing efforts. “If you can trace this on smaller time scales, you can shut it off.”

But while Giuliani’s statement is painfully inane—obesity, for example, is not an infectious disease—the truth is that contact tracing efforts in this pandemic do face historic and very real challenges. Whether it’s done manually by teams of investigators or automated through phone alerts, tracing has never been done at the scale needed to fight covid-19. All the genuine problems and concerns around tracing will be exacerbated, and they will need to be addressed if these efforts are going to succeed.

Here are five things that need to happen to make contact tracing work in the US.

Task 1: Hire 100,000 manual tracers

Contact tracing is a crucial public health tool. Photo by engin akyurt on Unsplash

Once the country begins to reopen, but before there is a vaccine or effective treatment, the primary way of preventing the spread of covid-19 will be manual tracing. Trained medical workers get in touch with those who have received a diagnosis and collect data about their movements and contacts. A patient may have been in contact with 100 other people recently, which means 100 follow-ups by phone or in person to track down everyone at risk of exposure. Depending on the data and science, tracers may then request isolation and tests. It’s labor-intensive work.

“This is going to be a massive undertaking,” New York governor Andrew Cuomo said at a press conference last week. 

His state and particularly New York City—now the hardest-hit region in the world—showcases the difficulties that America has had with building manual tracing capacity. A metro area with a population exceeding 21 million people and over 16,100 covid-19 deaths has had fewer than 1,000 tracers in action so far (compared with 9,000 in Wuhan, China, a city of 11 million). 

American health departments have been chronically underfunded since the 2008 financial crisis, losing more than 55,000 workers, despite repeated warnings that this lack of resources puts lives at risk. In fact, there are currently only 2,200 contact tracers in the entire United States, according to the Association of State and Territorial Health Officials.

That is now changing. New York is working with New Jersey and Connecticut at a regional level and hopes to tap the talents of thousands of medical students, while Massachusetts has budgeted $44 million to hire 1,000 contact tracers. San Francisco was one of the first local governments in the nation to begin building its contact tracing team, which could ramp up to 150 people monitoring a city of 880,000; California governor Gavin Newsom has promised 10,000 more across the state. But all this is merely a start. A recent report from the Johns Hopkins Center for Health Security said as many as 100,000 workers may be needed to make manual contact tracing efforts effective across the country. And to get there, Congress needs to spend around $3.6 billion. Former CDC director Tom Frieden, who believes the cost could be even higher, recently said that the same approach is needed all around the country.

Task 2: Protect privacy

This is why automated tracing has become appealing. The concept, which uses technology like Bluetooth and GPS to automatically determine whether a person may have been exposed, has been put in the spotlight as authorities around the world try to cope with the astonishing rate of covid infections.

“We will remain vigilant to make sure any contact tracing app remains voluntary and decentralized.”

Jennifer Granick, ACLU

High-tech efforts—especially in Asian countries like China, Singapore, Taiwan, and South Korea—have generated many headlines, but when Apple and Google release their system for building exposure notifications into their own smartphones, it will be the most significant development globally. The two companies are responsible for the software on more than 99% of phones on the planet, and eight out of 10 Americans own a smartphone. Apps built directly into iOS and Android, especially if they are interoperable, could dramatically increase the reach of the public health authorities in one swoop.

But privacy advocates and civil liberties campaigners have valid concerns. Contact tracing is a form of surveillance that, in the worst case, can be abused by companies or governments. Medical surveillance has repeatedly proved to be a life-saving tool, however, and Apple and Google say they are making privacy a priority by building decentralized systems designed to make malicious surveillance difficult while also providing key data to public health authorities. This is all new, and success is highly reliant on the actions of governments themselves.

“To their credit, Apple and Google have announced an approach that appears to mitigate the worst privacy and centralization risks, but there is still room for improvement,” Jennifer Granick, the ACLU’s surveillance and cybersecurity counsel, said when Apple and Google announced their tracing technology. “We will remain vigilant moving forward to make sure any contact tracing app remains voluntary and decentralized, and used only for public health purposes and only for the duration of this pandemic.”

Task 3: Ensure that tracing covers as many people as possible

But those building automated services are keen to stress that they are not trying to replace manual tracing; they’re trying to aid it. They see digital tools as a way to complement and scale up the work done by human teams. For example, smartphone alerts can help filter out those at low or no risk so that manual tracers can spend their time investigating genuine cases, people at higher risk, or those who are harder to contact.

“Our philosophy is that contact tracing is essential to shutting down the epidemic and having an economy that works,” Microsoft’s Langford says. “With digital tools, we want to enhance the process of contact tracing. The primary thrust is manual contact tracing. But there are things we can do with a phone app which makes this work more effectively.”

But even if a tracing app were downloaded by everyone who could legitimately use it, a major challenge is the simple fact that not everyone has a smartphone. If eight out of 10 Americans own one, that means two out of 10 don’t. Highly vulnerable groups are frequently on the wrong side of that digital divide, says George Rutherford, a professor of epidemiology at the University of California, San Francisco. 

Only 42% of Americans above the age of 65, the same group that makes up eight out of 10 deaths from covid-19, own a smartphone, according to a 2017 Pew Research Center poll.

In San Francisco, some of the biggest case clusters are among the city’s homeless and Latino populations, groups in which smartphone ownership rates also lag. And that is not the only issue. About 40% of the potentially exposed people that the city’s contact tracing effort has reached are monolingual Spanish speakers, many of whom live in crowded multigenerational households. At least one San Francisco health official has said that fear of immigration authorities is also discouraging this group from participating in the city’s manual tracing efforts. Apps that track their movements could be even less appealing.

“If you’re here or Texas or New York, places with large immigrant populations, with ICE breathing down everyone’s neck, the last thing they want is their information in a database,” Rutherford says.

This is an area where human tracers who can establish trust will be key.

“You need to make sure those people represent the communities that they are going into,” says Stanford’s Seema Yasmin. “It’s so important as you do contact tracing that it’s thorough—meaning that people trust you, that they do give you the information you really need to do it to its fullest extent. If people are already frightened about the way that immigrants are being treated during this crisis, because of the legislation and rhetoric, then you want to make sure that you send the right people into communities where there are lots of immigrants, whether they’re undocumented or not, to make sure that people feel like they can trust you and can be honest with you.”

There’s no getting away from the need for strong, careful manual tracing. Even in countries that have been noted for using high-tech tracing methods, the reality on the ground turns out to be very human. 

Task 4: Accept that technology alone cannot solve this problem

In Taiwan, fears of the virus were extremely high very early on in the outbreak. More than 850,000 Taiwanese citizens live in mainland China, and they routinely travel back and forth between the two countries. So far, however, there have been just 428 confirmed cases and six deaths from covid-19. Much media coverage focused on the Taiwanese government’s high-tech methods—for example, using cell-phone signals to track the location of people in quarantine and make sure they were staying inside.

But in fact, a mix of high- and low-tech measures has been key. The country closed its borders to foreign citizens arriving from China on February 7, and to all noncitizens on March 19. Even those who came back had to undergo a 14-day isolation at home.

Brandon Yu, a Taiwanese student at Brown University who returned to Taipei in March, says that every day during his quarantine he had to note his temperature on a piece of paper and respond to phone calls from his public health office. “They were super short,” he says. “‘How are you doing? Get some rest, I’ll call again later.’” On the first call they reminded him that his location was being tracked, and that even running down his phone’s battery—which stops it from pinging nearby cell-phone towers—could result in police or health officials knocking on his door.

People confirmed to have covid-19 are required to stay at a hospital until they recover (something that is possible only because Taiwan has so far kept its health system from being overwhelmed). Hao-yuan Cheng, a doctor from the Taiwanese CDC, says that while researchers can request patients’ cell-phone location data, it has not actually been very helpful: “Covid-19 spreads through close contacts—for example, within a household or in a classroom,” he says. “Usually, these are people that patients have spent a lot of time with and who they know personally.” To date, Taiwan has not employed any automated contact tracing apps.

It’s true that China leaned heavily on tech, partly aided by invasive and mandatory data-sharing enforced by the government. But Wuhan also had thousands of human contact tracers making calls to patients and contacts, compiling data, and tracking down others who were at risk, not to mention strict policies governing movement during lockdowns.

Meanwhile, in Singapore, where the TraceTogether app broke ground as the first government-backed automated tracing service in the world, only 10% to 20% of the country actually uses it. 

“People are confused about the way these areas work, thinking they are entirely automated,” says Sham Kakade, a research scientist at Microsoft who works on contact tracing. “There is automation, but there are humans in the loop in all of them. The humans are playing detective.”

All these countries have a toolbox of policies and approaches that help them investigate coronavirus cases. In Singapore, one of the lead developers of TraceTogether, Jason Bay, made his feelings on the subject extremely clear.

“If you ask me whether any Bluetooth contact-tracing system deployed or under development anywhere in the world is ready to replace manual contact tracing, I will say, without qualification, that the answer is, ‘No,’” Bay wrote. “Any attempt to believe otherwise is an exercise in hubris, and technology triumphalism. There are lives at stake. False positives and false negatives have real-life (and death) consequences. We use TraceTogether to supplement contact tracing—not replace it.”

Task 5: And do it all, now

Even if the likes of Rudy Giuliani scoff at the need for “armies” of tracers, and even if genuine concerns about adoption, accuracy, trust, and funding remain, every expert agrees: contact tracing is needed, it works, and doing it well will not be easy. 

More people may well be needed to make this happen. The Johns Hopkins estimate of 100,000 tracers nationwide may need to rise if the virus spreads further; even Google and Apple’s automated service will require many thousands of health workers to conduct verified testing and follow-ups.

Doing contact tracing well, at the volume required to tackle the disease, will require not just lots of people working on manual and automated efforts, but also plenty of money and a lot of coordination. A program of test-trace-isolate requires knowing where the coverage gaps are, who’s affected, what those people need, and what it takes to reach them. 

And for now? Don’t wait. 

George Rutherford at UCSF says his team will take a harder look at the potential of any emerging apps to reduce the workforce required a few weeks down the road. But for him, the focus now is on rapidly building the sort of traditional, human-powered contact tracing effort that has worked to contain outbreaks in the past.

“We’ve got to really understand exactly what we’re doing, who the people we’re dealing with are, what their concerns are, and what works best in identifying and isolating infections, and quarantining those who may have been exposed,” he says. “That’s the name of the game.”

—Additional reporting by James Temple and Katharin Tai

03:00

The US already has the technology to test millions of people a day [MIT Technology Review]

There is widespread agreement that the only way to safely reopen the economy is through a massive increase in testing. The US needs to test millions of people per day to effectively track and then contain the covid-19 pandemic.

This is a tall order. The country tested only around 210,000 people per day last week, and the pace is not increasing fast enough to get to millions quickly.

The urgency to do better is overwhelmingly bipartisan, with the most recent legislation adding $25 billion for testing a few days ago. Fears are growing, however, that testing might not scale in time to make a difference. As Senators Lamar Alexander and Roy Blunt wrote last week, “We have been talking with experts across the government and the private sector to find anyone who believes that current technology can produce the tens of millions of tests necessary to put this virus behind us. Unfortunately, we have yet to find anyone to do so.”

We believe that it can be done. The scientific community has the technological capabilities today to test everyone who needs it and enable people to come back to work safely.

To be clear—the senators are right that simply scaling up current practices for covid testing is insufficient. However, with a bit of innovation, the US can meet the need without inventing entirely new technologies. The necessary scale can be achieved by deploying the fruits of the last decade of innovation in biology, including the dizzying advances in DNA sequencing, genetic engineering, industrial automation, and advanced computation.

We speak from experience. We have worked with and helped engender many of these technologies across academia and industry. Scaling them for widespread testing will require investment, infrastructure, and determination, but nothing technologically or logistically infeasible.

Tests for mass screening may have different requirements and characteristics from the tests run in clinical labs today that are approved by the Food and Drug Administration. So what might a solution look like?

It must be scalable, meaning tens or hundreds of thousands of tests per day per facility, or at-home tests. It must be sensitive to early stages of infection, detecting the actual virus rather than immunity to it. And it must be less bound by health insurance and regulatory constraints, to allow fast and broad testing, contact tracing, and isolation. These differences do not mean lower standards. In fact, screening at this scale will require stringent requirements for safety, accuracy, and reliability.

The life sciences community is rising to the challenge. We are repurposing our labs to advance new centralized and at-home methods that solve the bottlenecks preventing testing from reaching global scale. This community is moving fast, with shared purpose and a commitment to open collaboration. As a result of these efforts, several promising avenues are emerging.

Some rely on DNA sequencing tools that have improved a million-fold since the completion of the Human Genome Project nearly 20 years ago. Not only can these tools now read trillions of base pairs of human DNA every day, but they can be readily repurposed to test for the presence of coronavirus at mass scale, using instruments that already exist across the country. Some methods, such as SHERLOCK and DETECTR, harness CRISPR DNA and RNA recognition tools to enable rapid, distributed testing in doctor’s offices and at other sites. Other efforts are removing critical bottlenecks, such as sample purification, to make the existing approaches more scalable.

There are additional possibilities, and the US needs to place bets on several of them at the same time. Some of those bets might fail, but the severity of the moment requires that we try. Chances are, we will need more than one of them.

As important as the diagnostic technology itself is the need to fuel innovation at all stages of the testing process, including sample collection, regulation, logistics, manufacturing, distribution, scale-up, data infrastructure, and billing. These are solvable problems. The solutions may sometimes differ from current clinical testing conventions, but these are not conventional times.

Maybe cotton swabs or saliva can be used for collection rather than traditional nasopharyngeal swabs, which are in critically short supply. Maybe mass screening tests don’t have to have the tested person’s name and date on every collection tube but could instead include a bar code that you snap a picture of with your phone. Maybe these tests can be self-administered at home or work rather than conducted by trained professionals in clinical settings. Maybe samples from low-risk, asymptomatic people can be pooled together for initial testing and further screened only in the event of a positive result. This would allow many more samples to be analyzed at once.

State or federal regulatory agencies could make these adjustments to conventional practices more easily if they were willing to treat mass screening for bringing people back to work differently from the testing used in clinical settings. In addition, mass screening efforts will require unconventional partnerships with private companies, nonprofits, universities, and government agencies to support the logistics, collection, manufacturing, scale-up, and data infrastructure to make such a system possible. All this can be done, and some of it is already starting to be done—but we must not lose hope.

The United States’ capabilities in the life sciences and information technology are unmatched in the world. The time is now to rapidly build a massively scaled screening program that will save lives while allowing us to reopen our economy and keep it open. This can be done, but it will require urgency and determination to make multiple, simultaneous bets on infrastructure, regulation, and technology, as well as collaboration to put it all together.

We have united before to face far greater challenges as a nation, and we can do so again.

Sri Kosuri is cofounder and CEO of Octant and an associate professor in the Department of Chemistry and Biochemistry at UCLA. Feng Zhang is the James and Patricia Poitras Professor of Neuroscience at MIT’s McGovern Institute, a core member of the Broad Institute, a Howard Hughes Medical Institute Investigator, and cofounder of Sherlock Biosciences. Jason Kelly is cofounder and CEO of Ginkgo Bioworks. Jay Shendure is a Howard Hughes Medical Institute Investigator at the University of Washington School of Medicine and scientific director of the Brotman Baty Institute.

Monday, 27 April

14:15

What if immunity to covid-19 doesn’t last? [MIT Technology Review]

Starting in the fall of 2016 and continuing into 2018, researchers at Columbia University in Manhattan began collecting nasal swabs from 191 children, teachers, and emergency workers, asking them to record when they sneezed or had sore throats. The point was to create a map of common respiratory viruses and their symptoms, and how long people who recovered stayed immune to each one.

The research included four coronaviruses, HKU1, NL63, OC43, and 229E, which circulate widely every year but don’t get much attention because they only cause common colds. But now that a new coronavirus in the same broad family, SARS-CoV-2, has the world on lockdown, information about the mild viruses is among our clues to how the pandemic might unfold.

What the Columbia researchers now describe in a preliminary report is cause for concern. They found that people frequently got reinfected with the same coronavirus, even in the same year, and sometimes more than once. Over a year and a half, a dozen of the volunteers tested positive two or three times for the same virus, in one case with just four weeks between positive results.

That’s a stark difference from the pattern with infections like measles or chicken pox, where people who recover can expect to be immune for life.

For the coronaviruses “immunity seems to wane quickly,” says Jeffrey Shaman, who carried out the research with Marta Galanti, a postdoctoral researcher.

Jeffrey Shaman leads the Virome of Manhattan study at Columbia University, which found people are frequently reinfected by the same cold-causing germs. The research shows immunity to some coronaviruses is short-lived. (MS TECH | AP PHOTO/MARY ALTAFFER)

Whether covid-19 will follow the same pattern is unknown, but the Columbia results suggest one way that much of the public discussion about the pandemic could be misleading. There is talk of getting “past the peak” and “immunity passports” for those who’ve recovered. At the same time, some hope the infection is more widespread than generally known, and that only a tolerable death total stands between us and high enough levels of population immunity for the virus to stop spreading.

All that presumes immunity is long-lived, but what if it is fleeting instead?

“What I have been telling everyone—and no one believes me, but it’s true—is we get coronaviruses every winter even though we’re seroconverted,” says Matthew Frieman, who studies the virus family at the University of Maryland. That is, even though most people have previously developed antibodies to them, they get the viruses again. “We really don’t understand whether it is a change in the virus over time or antibodies that don’t protect from infection,” he says.

Critical factor

We’re currently in the pandemic phase. That’s when a new virus, which humans are entirely susceptible to, rockets around the planet. And humanity is still a greenfield for covid-19—as of April 26, there were about three million confirmed cases, or one in 2,500 people on the planet. (Even though the true number of infections is undoubtedly higher, it’s still probably only a small fraction of the population.) Takeshi Kasai, the World Health Organization’s regional director for the Western Pacific, recently warned that until a vaccine is available, the world should get ready for a “new way of living.”

Further out, though, changes like social distancing or grounding airline flights may not be the biggest factor in our fate. Whether or not people acquire immunity to the virus, and for how long, will be what finally determines the toll of the disease, some researchers say.

Early evidence points to at least temporary protection against reinfection. Since the first cases were described in China in December, there has been no cut-and-dried case of someone being infected twice. While some people, including in South Korea, have tested positive a second time, that could be due to testing errors or persistence of the virus in their bodies.

“There are a lot of people who were infected and survived, and they are walking around, and they don’t seem to be getting reinfected or infecting other people,” says Mark Davis, a researcher at Stanford University. As of April 26, more than 800,000 people had officially recovered from the disease, according to the Johns Hopkins case-tracking dashboard.

Researchers in China also tested directly whether macaque monkeys resisted a second exposure to the new coronavirus. They infected the monkeys with the virus, and then four weeks later, after they recovered, tried again. The second time, the monkeys didn’t develop symptoms, and researchers couldn’t find any virus in their throats.

What’s unknown is how long immunity lasts—and only five months into the outbreak, there is no way to know. If it’s for life, then every survivor will add to a permanent bulwark against the pathogen’s spread. But if immunity is short, as it is for the common coronaviruses, covid-19 could set itself up as a seasonal superflu with a high fatality rate—one that emerges in a nasty wave winter after winter.

The latest computer models of the pandemic find that the duration of immunity will be a key factor, and maybe the critical one. One model, from Harvard University and published in Science, shows the covid-19 virus becoming seasonal—that is, staging a winter resurgence every year or two as immunity in the population builds up and then ebbs away.

After testing different scenarios, the Harvard group concluded that their projections of how many people end up getting covid-19 in the coming years depended “most crucially” on “the extent of population immunity, whether immunity wanes, and at what rate.” In other words, the critical factor in projecting the path of the outbreak is also a total unknown.

Seasonal virus

Because so many other human coronaviruses are mild, they haven’t gotten the same attention as influenza, a shape-shifting virus that is closely followed and genetically analyzed to create a new vaccine each year. But it’s not even known, for instance, whether the common coronaviruses mutate in ways that let them evade the immune system, or whether there are other reasons immunity is so short-lived.

“There is no global surveillance of coronavirus,” says Burtram Fielding, a virologist at the University of the Western Cape, in South Africa, who tracks scientific reports in the field. “Even though the common cold costs the US $20 billion a year, these viruses don’t kill, and anything that does not kill, we don’t have surveillance for.”

The Virome of Manhattan project, led by Shaman with funding from the Defense Department, has been an exception. It set out to detect respiratory viruses with the eventual aim of “nowcasting,” or having a live tracker on common infections circulating in the city.

One finding of the research is that people who got the same coronavirus twice didn’t have fewer symptoms the second time. Instead, some people never got symptoms at all; others had bad colds two or three times. Shaman says the severity of infection tended to run in families, suggesting a genetic basis.  

The big question is what this fizzling, short-lived resistance to common cold viruses means for covid-19. Is there a chance the disease will turn into a killer version of the common cold, constantly out there, infecting 10% or 20% of the population each year, but also continuing to kill one in a hundred? If so, it would amount to a plague capable of shaving the current rate of world population growth by a tenth.

Some scientists find the question too dark to contemplate. Shaman didn’t want to guess at how covid-19 will behave either. “Basically, we have some unresolved questions,” he wrote in an email. “Are people one and done with this virus? If not, how often will we experience repeat infections? Finally, will those repeat infections be milder, just as severe, or even worse?”

Immune surveys

Big studies of immunity are already under way to try to answer those questions. Germany has plans to survey its population for antibodies to the virus, and in North America, 10,000 players and other employees of Major League Baseball are giving pinprick blood samples for study. In April, the US National Institutes of Health launched the COVID-19 Pandemic Serum Sampling Study, which it says will collect blood from 10,000 people, too.

By checking for antibodies in people’s blood, such serosurveys can determine how many people have been exposed to the virus, including those who had no symptoms or only mild ones.

Researchers will also be scavenging through the blood of covid-19 cases in order to measure the nature and intensity of immune responses, and to figure out if there’s a connection to how sick people got. “What we are seeing right now with the coronavirus is the need for immune monitoring, because some people are shrugging this off and others are dying,” Davis says. “The gradient is serious and no one really understands why.”

Our immune system has different mechanisms for responding to germs we’ve never seen before. Antibodies, made by B cells, coat a virus and don’t let it infect cells. T cells, meanwhile, regulate the immune response or destroy infected cells. Once an infection is past, long-term “memory” versions of either type of cell can form.

What sort of immune memory will covid-19 cause? Stephen Elledge, a geneticist at Harvard University, says the severity of the disease could put it in a different category from the ordinary cold. “You might have a cold for a week, whereas if you go through three weeks of hell, that may give you more of a memory for longer,” he says.

Other clues come from the 2002-03 outbreak of SARS, a respiratory infection even more deadly than covid-19. Six years after the SARS outbreak, doctors in Beijing went hunting for an immune response among survivors. They found no antibodies or long-lived memory B cells, but they did find memory T cells.

Because doctors managed to stop the SARS outbreak after about 8,000 cases, there’s never been a chance for anyone to get infected a second time, but those T cells could be a sign of ongoing immunity. A later vaccine study in mice found that memory T cells protected the animals from the worst effects when scientists tried infecting them again with SARS.

To Frieman, at the University of Maryland, all this uncertainty about immune response to coronaviruses means there’s still little chance of predicting when, or how, the outbreak ends. “I don’t know when this goes away, and if anyone says they know, they don’t know what they are talking about,” he says.

11:22

Google’s medical AI was super accurate in a lab. Real life was a different story. [MIT Technology Review]

The covid-19 pandemic is stretching hospital resources to the breaking point in many countries around the world. It is no surprise that many people hope AI could speed up patient screening and ease the strain on clinical staff. But a study from Google Health—the first to look at the impact of a deep-learning tool in real clinical settings—reveals that even the most accurate AIs can actually make things worse if not tailored to the clinical environments in which they will work.

Existing rules for deploying AI in clinical settings, such as the standards for FDA clearance in the US or a CE mark in Europe, focus primarily on accuracy. There are no explicit requirements that an AI must improve the outcome for patients, largely because such trials have not yet run. But that needs to change, says Emma Beede, a UX researcher at Google Health: “We have to understand how AI tools are going to work for people in context—especially in health care—before they’re widely deployed.” 

Google’s first opportunity to test the tool in a real setting came from Thailand. The country’s ministry of health has set an annual goal to screen 60% of people with diabetes for diabetic retinopathy, which can cause blindness if not caught early. But with around 4.5 million patients to only 200 retinal specialists—roughly double the ratio in the US—clinics are struggling to meet the target. Google has CE mark clearance, which covers Thailand, but it is still waiting for FDA approval. So to see if AI could help, Beede and her colleagues outfitted 11 clinics across the country with a deep-learning system trained to spot signs of eye disease in patients with diabetes. 

In the system Thailand had been using, nurses take photos of patients’ eyes during check-ups and send them off to be looked at by a specialist elsewhere—a process that can take up to 10 weeks. The AI developed by Google Health can identify signs of diabetic retinopathy from an eye scan with more than 90% accuracy—which the team calls “human specialist level”—and, in principle, give a result in less than 10 minutes. The system analyzes images for telltale indicators of the condition, such as blocked or leaking blood vessels.

Sounds impressive. But an accuracy assessment from a lab goes only so far. It says nothing of how the AI will perform in the chaos of a real-world environment, and this is what the Google Health team wanted to find out. Over several months they observed nurses conducting eye scans and interviewed them about their experiences using the new system. The feedback wasn’t entirely positive.

When it worked well, the AI did speed things up. But it sometimes failed to give a result at all. Like most image recognition systems, the deep-learning model had been trained on high-quality scans; to ensure accuracy, it was designed to reject images that fell below a certain threshold of quality. With nurses scanning dozens of patients an hour and often taking the photos in poor lighting conditions, more than a fifth of the images were rejected.

Patients whose images were kicked out of the system were told they would have to visit a specialist at another clinic on another day. If they found it hard to take time off work or did not have a car, this was obviously inconvenient. Nurses felt frustrated, especially when they believed the rejected scans showed no signs of disease and the follow-up appointments were unnecessary. They sometimes wasted time trying to retake or edit an image that the AI had rejected.

A nurse operates the retinal scanner, taking images of the back of a patient’s eye. (Google)

Because the system had to upload images to the cloud for processing, poor internet connections in several clinics also caused delays. “Patients like the instant results, but the internet is slow and patients then complain,” said one nurse. “They’ve been waiting here since 6 a.m., and for the first two hours we could only screen 10 patients.”

The Google Health team is now working with local medical staff to design new workflows. For example, nurses could be trained to use their own judgment in borderline cases. The model itself could also be tweaked to handle imperfect images better. 

Risking a backlash

“This is a crucial study for anybody interested in getting their hands dirty and actually implementing AI solutions in real-world settings,” says Hamid Tizhoosh at the University of Waterloo in Canada, who works on AI for medical imaging. Tizhoosh is very critical of what he sees as a rush to announce new AI tools in response to covid-19. In some cases tools are developed and models released by teams with no health-care expertise, he says. He sees the Google study as a timely reminder that establishing accuracy in a lab is just the first step.

Michael Abramoff, an eye doctor and computer scientist at the University of Iowa Hospitals and Clinics, has been developing an AI for diagnosing retinal disease for several years and is CEO of a spinoff startup called IDx Technologies, which has collaborated with IBM Watson. Abramoff has been a cheerleader for health-care AI in the past, but he also cautions against a rush, warning of a backlash if people have bad experiences with AI. “I’m so glad that Google shows they’re willing to look into the actual workflow in clinics,” he says. “There is much more to health care than algorithms.”

Abramoff also questions the usefulness of comparing AI tools with human specialists when it comes to accuracy. Of course, we don’t want an AI to make a bad call. But human doctors disagree all the time, he says—and that’s fine. An AI system needs to fit into a process where sources of uncertainty are discussed rather than simply rejected.

Get it right and the benefits could be huge. When it worked well, Beede and her colleagues saw how the AI made people who were good at their jobs even better. “There was one nurse that screened 1,000 patients on her own, and with this tool she’s unstoppable,” she says. “The patients didn’t really care that it was an AI rather than a human reading their images. They cared more about what their experience was going to be.”

Correction: The opening line was amended to make it clear not all countries are being overwhelmed.

05:00

Releasing kubectl support in Access [The Cloudflare Blog]


Starting today, you can use Cloudflare Access and Argo Tunnel to securely manage your Kubernetes cluster with the kubectl command-line tool.

We built this to address one of the edge cases that stopped all of Cloudflare, as well as some of our customers, from disabling the VPN. With this workflow, you can add SSO requirements and a zero-trust model to your Kubernetes management in under 30 minutes.

Once deployed, you can migrate to Cloudflare Access for controlling Kubernetes clusters without disrupting your current kubectl workflow, a lesson we learned the hard way from dogfooding here at Cloudflare.

What is kubectl?

A Kubernetes deployment consists of a cluster that contains nodes, which run the containers, as well as a control plane that can be used to manage those nodes. Central to that control plane is the Kubernetes API server, which interacts with components like the scheduler and manager.

kubectl is the Kubernetes command-line tool that developers can use to interact with that API server. Users run kubectl commands to perform actions like starting and stopping the nodes, or modifying other elements of the control plane.
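
For readers less familiar with the tool, a few illustrative commands give a feel for the traffic involved (the deployment and pod names below are hypothetical):

$ kubectl get nodes                        # list cluster nodes and their status
$ kubectl describe deployment my-app       # inspect a deployment named my-app (hypothetical)
$ kubectl logs my-app-7d4b9c-x2kfm         # read logs from one of its pods (hypothetical)

Each of these is ultimately an HTTPS request to the Kubernetes API server, which is why how users reach that server matters so much.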

In most deployments, users connect to a VPN that allows them to run commands against that API server by addressing it over the same local network. In that architecture, user traffic to run these commands must be backhauled through a physical or virtual VPN appliance. More concerning, in most cases the user connecting to the API server will also be able to connect to other addresses and ports in the private network where the cluster runs.

How does Cloudflare Access apply?

Cloudflare Access can secure web applications as well as non-HTTP connections like SSH, RDP, and the commands sent over kubectl. Access deploys Cloudflare’s network in front of all of these resources. Every time a request is made to one of these destinations, Cloudflare’s network checks for identity like a bouncer in front of each door.


If the request lacks identity, we send the user to your team’s SSO provider, like Okta, AzureAD, or G Suite, where the user can log in. Once they log in, they are redirected to Cloudflare, where we check their identity against a list of users who are allowed to connect. If the user is permitted, we let their request reach the destination.

In most cases, those granular checks on every request would slow down the experience. However, Cloudflare Access completes the entire check in just a few milliseconds. The authentication flow relies on Cloudflare’s serverless product, Workers, and runs in every one of our data centers in 200 cities around the world. With that distribution, we can improve performance for your applications while also authenticating every request.

How does it work with kubectl?

To replace your VPN with Cloudflare Access for kubectl, you need to complete two steps:

  • Connect your cluster to Cloudflare with Argo Tunnel
  • Connect from a client machine to that cluster with Argo Tunnel

Connecting the cluster to Cloudflare

On the cluster side, Cloudflare Argo Tunnel connects those resources to our network by creating a secure tunnel with the Cloudflare daemon, cloudflared. As an administrator, you can run cloudflared in any space that can connect to the Kubernetes API server over TCP.

Once installed, an administrator authenticates the instance of cloudflared by logging in to a browser with their Cloudflare account and choosing a hostname to use. Once selected, Cloudflare will issue a certificate to cloudflared that can be used to create a subdomain for the cluster.
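
That authentication step is a single command. Assuming cloudflared is already installed on the machine, running the following opens a browser window where the administrator logs in and picks the zone to use; the resulting certificate (cert.pem) typically lands in ~/.cloudflared:

$ cloudflared tunnel login

That certificate is what authorizes this instance of cloudflared to create tunnels for the chosen hostname.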

Next, an administrator starts the tunnel. In the example below, the hostname value can be any subdomain of the hostname selected in Cloudflare; the url value should be the API server for the cluster.

cloudflared tunnel --hostname cluster.site.com --url tcp://kubernetes.docker.internal:6443 --socks5=true 

This should be run as a systemd process to ensure the tunnel reconnects if the resource restarts.
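
As a rough sketch only (the unit name, binary path, and hostname below are placeholders rather than an official Cloudflare unit file), a minimal service definition might look like this:

# /etc/systemd/system/cloudflared-k8s.service (hypothetical path and name)
[Unit]
Description=Argo Tunnel to the Kubernetes API server
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/cloudflared tunnel --hostname cluster.site.com --url tcp://kubernetes.docker.internal:6443 --socks5=true
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now cloudflared-k8s so the tunnel comes back automatically after a reboot or crash.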

Connecting as an end user

End users do not need an agent or client application to connect to web applications secured by Cloudflare Access. They can authenticate to on-premise applications through a browser, without a VPN, like they would for SaaS tools. When we apply that same security model to non-HTTP protocols, we need to establish that secure connection from the client with an alternative to the web browser.

Unlike our SSH flow, end users cannot modify kubeconfig to proxy requests through cloudflared. Pull requests have been submitted to add this functionality to kubeconfig, but in the meantime users can set an alias to serve a similar function.

First, users need to download the same cloudflared tool that administrators deploy on the cluster. Once downloaded, they will need to run a corresponding command to create a local SOCKS proxy. When the user runs the command, cloudflared will launch a browser window to prompt them to login with their SSO and check that they are allowed to reach this hostname.

$ cloudflared access tcp --hostname cluster.site.com --url 172.0.0.3:1234

The proxy allows your local kubectl tool to connect to cloudflared via a SOCKS5 proxy, which helps avoid issues with TLS handshakes to the cluster itself. In this model, TLS certificates can still be exchanged and verified with the Kubernetes API server without disabling or modifying that flow for end users.

Users can then create an alias to save time when connecting. The example below aliases all of the steps required to connect in a single command. This can be added to the user’s bash profile so that it persists between restarts.

$ alias kubeone="env HTTPS_PROXY=socks5://172.0.0.3:1234 kubectl"
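
With the proxy from the previous step still running, the alias works anywhere kubectl would; for example (the namespace here is just an illustration):

$ kubeone get pods --namespace default

The request travels through the local SOCKS proxy and over Argo Tunnel to the cluster, with Cloudflare Access checking identity along the way.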

A (hard) lesson when dogfooding

When we build products at Cloudflare, we release them to our own organization first. The entire company becomes a feature’s first customer, and we ask them to submit feedback in a candid way.

Cloudflare Access began as a product we built to solve our own challenges with security and connectivity. The product impacts every user in our team, so as we’ve grown, we’ve been able to gather more expansive feedback and catch more edge cases.

The kubectl release was no different. At Cloudflare, we have a team that manages our own Kubernetes deployments and we went to them to discuss the prototype. However, they had more than just some casual feedback and notes for us.

They told us to stop.

We had started down an implementation path that was technically sound and solved the use case, but did so in a way that engineers who spend all day working with pods and containers would find to be a real irritant. The flow required a small change in presenting certificates, which did not feel cumbersome when we tested it, but we do not use it all day. That grain of sand would cause real blisters as a new requirement in the workflow.

With their input, we stopped the release, and changed that step significantly. We worked through ideas, iterated with them, and made sure the Kubernetes team at Cloudflare felt this was not just good enough, but better.

What’s next?

Support for kubectl is available in the latest release of the cloudflared tool. You can begin using it today, on any plan. More detailed instructions are available to get started.

If you try it out, please send us your feedback! We’re focused on improving the ease of use for this feature, and other non-HTTP workflows in Access, and need your input.

New to Cloudflare for Teams? You can use all of the Teams products for free through September, including Cloudflare Access and Argo Tunnel. You can learn more about the program, and request a dedicated onboarding session, here.

02:30

The tech industry turns to mask diplomacy [MIT Technology Review]

As the coronavirus spread from China across the world earlier this year, two friends in Sydney watched in horror. Milton Zhou is a cofounder of a renewable energy company called the Maoneng Group, which developed some of Australia’s largest solar farms. Saul Khan is a former partner in an energy efficiency consultancy. They met in a Facebook group for startups, where they bonded over a discussion about using blockchain to track goods as they’re shipped internationally. They had experience buying solar panels and other products from China, and they expected the medical supply chain to work, at least in the early stage of the outbreak. Instead, they watched as health-care workers ran out of respirators and other critical supplies. “We realized, okay, something is really wrong here,” says Khan. “People aren’t able to source things quickly.” Then it occurred to them that maybe they could help.

As demand for masks, respirators, and other personal protective equipment (PPE) has skyrocketed around the world, medical supplies have become a new geopolitical flashpoint. Officials have accused each other of hijacking shipments by buying them off the tarmac or by seizing them en route. When a US buyer allegedly diverted a batch of Chinese-made respirators bound for the Berlin police, for example, a German official denounced the act as “modern piracy.” But there is a bright spot. Amid the chaos, tech industry veterans have arranged shipments of high-quality goods, using both their political clout and their access to private jets. “A lot of hospitals tend to buy locally,” says Khan. “They’re used to local classifications and to not having to deal with import-export paperwork. People in tech are more global. They have the reach.” The result is good public relations, at a moment when the tech industry desperately needs it.

Khan and Zhou set up a nonprofit called RapidWard that buys medical supplies on behalf of governments and hospitals around the world, handling all the logistics for nominal fees—in some cases chartering airplanes to ensure that they arrive on time. Until their first clients paid up, Zhou fronted the money for orders himself. In the United States, a group of venture capitalists and technologists—several of them with the San Francisco emerging-technology investment firm 8VC—have created a similar organization called Operation Masks. Both nonprofits are now in great demand. Since its inception in late January, RapidWard has taken orders for $111 million worth of goods for front-line workers in Italy, Iran, and Switzerland, among other countries. Other shipments have been arranged by Chinese tech companies and foundations looking to burnish their image outside China, including the telecommunications giant Huawei, gaming and social-media conglomerate Tencent, and the Alibaba and  Jack Ma Foundations, which are both linked to Jack Ma, the founder of e-commerce firm Alibaba.

China produces the bulk of the world’s PPE. In January and February, as the coronavirus tore through Wuhan and the country went into lockdown, Chinese medical supply factories ramped up production, which was supplemented by an influx of donated supplies from the United States and Europe.

Then China went back to work and its government tried to jump-start the economy. With the virus spreading throughout the rest of the world, demand for medical equipment soared to the point that factory owners in the industry began boasting that they owned yinqianji: banknote-printing machines. As orders of cars, clothes, and other consumer goods dwindled, desperate manufacturers who specialized in these products switched their lines over to make masks, gloves, and gowns. Some of them had the clean rooms and know-how needed to make PPE. Others did not.

With demand exploding, full payment up front became the norm. Fraud and counterfeit products proliferated. In early April, the Chinese government rolled out measures intended to clamp down on counterfeit PPE, and it became even trickier to ship products out of China. Buyers panicked. “No one is thinking seriously these days,” says Renaud Anjoran, a manufacturing supply chain auditor based in Hong Kong. “People are wiring money into the personal account of a guy in an apartment playing middleman, for transfers of $2 million.”

Speculators abound. Aku Zhang, vice president for international sales with CMICS Medical Instrument Company in Shanghai, says that he is regularly approached by traders who want to pay cash for tens of millions of KN95 masks, a high-quality Chinese respirator. He assumes that the buyers are connected to governments but adds that he has no way to know for certain.

On the other end of the supply chain are hospitals and governments, whose purchasing teams are typically conservative in their decision-making and unused to dealing with complex supply-chain issues. Before the outbreak, they relied on medical distributors. Now, with distributors overtaxed, purchasing officers wake up every morning to emails from unfamiliar brokers. “We’re getting a different type of spam,” says Dan Rogan, who purchases supplies for jails, juvenile facilities, and first responders in Minnesota’s Hennepin County. Lily Liu, a cofounder of Operation Masks, says it’s understandable that purchasers are overwhelmed: “It’s as if you went from shopping in a grocery store to having to vet a cattle farm just in order to eat a steak.” In the United States, a lack of national leadership on sourcing has exacerbated the problem.

A cargo flight carrying over 6 million medical items including face masks, test kits, face shields, and protective suits from Guangzhou arrives in Addis Ababa on March 22. The supplies were donated by the Jack Ma Foundation and Alibaba Foundation and will be distributed from Ethiopia to countries throughout Africa.
AP PHOTO/MULUGETA AYENE

Tech companies, with their global workforces and capital to spare at a time when most other industries are contracting, are attempting to fill the gap. “We know how to build organizations, and we have the ability to build online platforms,” says Liu. (She previously cofounded Earn.com, a cryptocurrency startup that was acquired by Coinbase in 2018 for $120 million.)

The Chinese tech giants, in particular, have experience navigating complex and ever-evolving government regulations. They also need the image boost. Smartphone maker Xiaomi, which sells low-cost devices in the developing world, donated respirators to India and Italy. Tencent assisted New England Patriots owner Robert Kraft with an airlift of protective equipment that flew from Shenzhen to Boston on the NFL team’s 767.

In most cases, such donations have been arranged independent of the Chinese government, which is separately rewarding political allies with PPE. In March, China began sending goodwill airlifts of supplies and teams of experts to countries it considers friendly, including Pakistan, the Philippines, and Ukraine. When a Chinese medical team arrived in Serbia, President Aleksandar Vučić went so far as to kiss the Chinese flag. Even as American doctors pleaded for masks, and photos circulated on social media of nurses in New York wearing trash bags as protection, none of these government donations went to the United States. “That’s a sign — a country that’s rapidly becoming an epicenter of the pandemic and also has a desperate need for PPE actually is not receiving the donations of masks,” says Yanzhong Huang, a senior fellow for global health at the Council on Foreign Relations. An article published by state news agency Xinhua in early March warned that China could use export bans and “strategic control over medical products” to plunge the United States “into the mighty sea of coronavirus.”

But then Alibaba cofounder Jack Ma donated 500,000 coronavirus testing kits and a million masks to the United States. “All the best to our friends in America,” he tweeted. The shipment was received and distributed by the Centers for Disease Control and Prevention in Atlanta, according to the Alibaba Group. (The CDC did not respond to a request for comment.) The Alibaba and Jack Ma Foundations have also published a handbook for global health-care workers explaining how to treat patients with covid-19 and donated medical supplies throughout the world—including to 54 African countries.  (Brian Wong, a vice president at Alibaba, is among the leaders of Operation Masks. A spokesperson for the foundations declined to comment on the shipments.)

For many Chinese tech companies, swooping in as the savior is shrewd publicity. “They have the funds and the political clout, and it’s good for their business mission,” says J. Norwell Coquillard, executive director of the Washington State China Relations Council, a Seattle-based lobbying group helping local health-care buyers vet suppliers of PPE.

In some cases the companies also have something to prove. Before the outbreak, Huawei was bidding to build 5G wireless networks throughout the world in the face of US efforts to thwart it. The company was waging a separate battle in Canada, where chief financial officer Meng Wanzhou is under house arrest in Vancouver, pending extradition to the United States on fraud charges. In early April, Huawei raised eyebrows when it quietly donated a large stock of masks and respirators to Canada. The Vancouver Sun reported that British Columbia received hundreds of thousands of masks and respirators. Huawei has also donated medical supplies to communities across the United States, as well as to various countries in Europe, and provided free or discounted AI-driven diagnostic technologies, intended to screen for covid-19, to Ecuador and the Philippines. Joy Tan, a senior vice president for Huawei in the United States, says that the firm wants to “use our technologies and solutions to help fight the crisis,” but would not comment on its donations of masks and other PPE or confirm how much the company has donated to specific countries.

At a moment when federal purchasing is in disarray, some think there’s room for American companies with a presence in China to chip in as well. “Trump has basically outsourced a lot of American policy to corporations,” says Coquillard. Companies that are active in China could now help the federal government procure massive orders, he adds: “I don’t see why he doesn’t just say, ‘Hey, guys, get it done!’”

Khan and Zhou, meanwhile, sometimes question their own sanity. Even with a nonprofit that supplies only to front-line workers, they routinely hear from shady brokers, some of whom appear to be connected to organized crime. They also run into logistics issues that don’t have an easy solution. They recently shipped a package to Australia that ended up in the Netherlands because of a mistake in a tracking number. “To be honest, it’s a bit scary,” says Khan. “We’re assuming the risk.”

Sunday, 26 April

08:40

Saturday Morning Breakfast Cereal - Monster Under the Bed [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I would like brownie points from the literary community for the skaldic alliteration.


Today's News:

05:00

Doctors are now social-media influencers. They aren’t all ready for it. [MIT Technology Review]

When President Donald Trump suggested during a press conference that doctors should look into treating covid-19 patients with an “injection inside” of disinfectant, “or almost a cleaning,” Austin Chiang, a gastroenterologist at Thomas Jefferson University Hospital in Philadelphia, knew he had to react. 

In his lab coat and scrubs, a stethoscope draped around his neck, and staring directly into the camera, Chiang sat in front of a news headline about Trump’s comments and mimicked screaming. 

“I promise I won’t pretend to know how to run a country if you don’t pretend to know how to practice medicine,” Chiang wrote on the screen. The video, posted shortly after Trump’s comments, quickly gained tens of thousands of views.

Chiang is one of a new generation of doctors and medical professionals who have built online followings on platforms like TikTok, Instagram, and YouTube. Their medical credentials give their thoughts on the virus added weight.

While doctors made famous by TV have had to apologize for downplaying the virus and suggesting that losing some lives was an acceptable cost for reopening schools, some of the new doctor-influencers are positioning themselves differently. At their best, this wave of popular experts can combat misinformation by making responsible medicine sound almost as exciting as the scores of medical conspiracy theories, exaggerated claims, and snake oil promises that spread rapidly online. 

For some, it’s a gap that was waiting to be filled. Natural-healing personalities peddling dubious information were “early adopters” of social media, says Renee DiResta, a researcher with the Stanford Internet Observatory who studies health disinformation. By the time platforms like Facebook and YouTube began cracking down on bogus health claims, their promoters were already selling “cures” in Facebook groups, racking up millions of views on YouTube, and showing up in Google results. 

“They sell their ‘cures’ using the same techniques that brands use to sell shoes,” DiResta says. “With an added layer of mystique via framings that suggest elite knowledge, like ‘The cure THEY don’t want you to know about!’” 

Science-based medical professionals are playing catch-up. 

The internet’s “black hole” of expertise

“I actually think that the lack of quality physicians on social media has led to the rise of social influencers peddling miraculous cures and detox teas and all that,” says Mikhail Varshavski, aka “Doctor Mike,” a family physician in New Jersey who has more than 5 million subscribers on YouTube. Until recently, he added, personality-driven medical social media has “just been sort of this black hole where doctors aren’t there because they don’t want to be perceived as unprofessional, and as a result, misinformation thrives.”

But online fame for doctors and nurses comes with risks that are only heightened by the importance of their jobs. And as more and more medical professionals jump online to help guide the public and combat misinformation, there’s an additional risk that they become part of the problem they’re trying to fight.

The very things that help Austin Chiang reach a younger audience on TikTok can, if he’s not careful, undermine the trust his audience has in medical professionals. You have to be funny to connect on TikTok without seeming cringey or out of touch with the culture of the app. And you have to maintain that position without crossing a line into unethical behavior. There have been, for instance, medical professionals who have used TikTok to mock their patients. And even those with the best of intentions and accurate information can find themselves in trouble when they move to a new medium.

“How do we present ourselves online without eroding the public’s trust in us?” Chiang says. “There’s a lot of people out there who are new to the platform and who will throw something up there without thinking it through.” 

Good intentions gone wrong

Take Jeffrey VanWingen, a doctor who runs a private family practice in western Michigan. He wanted to help the public when he stood in his kitchen before work, filming a video in his scrubs that he believed the world needed to see: “PSA: Grocery Shopping Tips in COVID-19.” It was March 24; the governor of his state was going to issue shutdown orders the following day. VanWingen is not an epidemiologist or a food safety expert, but he did know sterile techniques that, he believed, could be modified to help people keep the coronavirus from coming into their homes along with their groceries.

Although he knew that the risk of someone getting sick from touching groceries was likely very low (grocery shopping’s main risk these days comes from the other people in the store with you), “Even very low is not negligible. It’s not nothing. And I think my goal was to empower people to help keep their risk of acquiring covid-19 airtight,” he says.

VanWingen’s 13-minute video demonstrated procedures for disinfecting different types of food, his calm voice guiding viewers through dumping food into “clean” containers, disinfecting packaging, and washing produce. The video was shared widely on social media, and passed between friends in email chains, as a panicked public looked for something they could do to take some control as a terrifying virus spread. The video, the first ever on his month-old YouTube channel, gained 25 million views and counting. But the video is also, at points, misleading. 

You should not, as VanWingen initially suggested, wash your produce with soap—it’s better to just rinse fruits and vegetables in cold water, because soap residue can cause digestive issues. And his suggestion to leave groceries outside or in the garage for a few days before bringing them into your home needed a clarification that this would not be a safe procedure for perishable goods. 

VanWingen lobbied YouTube to let him edit the video and remove the portion with potentially harmful advice, but there wasn’t much he could do aside from take the whole thing down. He decided against that, instead littering the video’s description with updates linking to new and more accurate information. But, he says, he still stands by the majority of the advice in the video. 

“If you associate Dr. VanWingen with misinformation, that weighs on me extremely heavily,” he says. Compared with others, he says, his mistake was innocent and would be unlikely to have dire consequences. “There are doctors that I’ve seen that are promoting like, for instance, hydroxychloroquine and maybe even promoting fear,” he says, referring to the unproven and, according to the FDA, potentially dangerous covid-19 treatment that was promoted by Trump. “That is certainly not where I would see myself coming from.”

“There are doctors I’ve seen promoting hydroxychloroquine and maybe even promoting fear.”

And the people who can get views for a medical message on social media aren’t necessarily the ones most qualified to craft it. Eric Feigl-Ding, an epidemiologist who now has a large following on Twitter thanks to his evocative tweets about covid-19, has found his expertise and analysis questioned by other epidemiologists. 

Varshavski—that is, Doctor Mike—became YouTube’s go-to medical expert after a 2015 Buzzfeed article about his Instagram account dubbed him the “hot doctor.” And although he often stresses to his audience that “expert opinion,” including his, is “the lowest form of evidence,” his viewers are more likely to trust what he says in his videos than they are to track down and read a randomized controlled study on the same topic. That’s not necessarily a bad thing, if the information is sound and clearly presented—and he described his role during the pandemic as essentially turning himself into a mouthpiece and platform for the CDC, the WHO, and leading experts in the field.   

But it’s easy to lose that balance. 

“If you are a doctor and you’re popular and people look to you for guidance, and you believe your expert opinion without any kind of research to substantiate it outweighs that of the guidance from the CDC and WHO, you’ve crossed the line,” he says. 

And that’s the central challenge: people will turn to the internet for information during a health crisis, whether it’s a personal crisis or one facing the entire world. But the best, most accurate information isn’t always packaged and optimized in a way that is appealing to a curious public searching for certainty. For every CDC video about the latest studies on the coronavirus, there’s someone out there claiming to be the one person willing to tell you what “doctors don’t want you to know.” Alongside that is a president amplifying potentially dangerous ideas so that they become significant news stories. 

Doctors becoming brands

There’s another challenge facing these doctor-influencers, too: branding and money. Personalities like Doctor Mike can make accurate information interesting by becoming influencers, but they also have to figure out a way to do that without falling into ethical quicksand. 

People become famous online by becoming human brands. But “turning ourselves into brands can also drive people in a different direction,” says Chiang. “Some people out there are aligning us with big pharma already. The last thing they want to see is that we are selling a product or idea.” 

Varshavski, like many content creators, accepts sponsors for his Instagram and YouTube accounts, but he says he has to make sure that those sponsorships don’t look like medical endorsements. Chiang, who also serves as the chief medical social-media officer for his hospital, has to carefully screen which TikTok challenges he participates in, and the songs he uses with them, to avoid associating his image and that of his profession with something offensive or tasteless. Chiang is informative on TikTok, but he manages to engage effectively with how people already use the app. And that’s not always something doctors are capable of—or interested in trying to learn how to do.  

“Historically, there’s never been any sort of teaching in medical training in how to communicate on a public level with our communities and our patients,” he says.

Online fame takes skill and maintenance to a degree that most people underestimate. And especially for doctors and other people who work in fields that are targets of disinformation, there are some more serious risks. Chiang points out that some companies will simply steal content from medical professionals on social media and use it to sell their products. And battling medical misinformation online can anger those who believe in it, potentially endangering the personal safety of doctors who try to take it on. 

But Chiang and Varshavski say that the risks are worth it, especially if having more doctors online helps people find better information about their health. 

As doctors who are on the internet but treat real patients too, they can see firsthand how misinformation affects people. In one recent weekend Varshavski treated five covid-19 patients with mild symptoms, and each asked for hydroxychloroquine, a risky possible treatment that can cause serious heart issues in some patients. Some told Varshavski that they heard about it on TV. 

Saturday, 25 April

10:01

Saturday Morning Breakfast Cereal - Evolved [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Lots of creatures whine, but only humans whine about being humans.


Today's News:

05:00

Covid-19 has blown apart the myth of Silicon Valley innovation [MIT Technology Review]

The frustration in Marc Andreessen’s post on our failure to prepare and respond competently to the coronavirus pandemic is palpable, and his diagnosis is adamant: “a failure of action, and specifically our widespread inability to ‘build.’” Why don’t we have vaccines and medicines, or even masks and ventilators? He writes: “We could have these things but we chose not to­—specifically we chose not to have the mechanisms, the factories, the systems to make these things. We chose not to ‘build.’”

Forgetting for a moment that this is coming from the same guy who famously explained in 2011 “why software is eating the world,” Andreessen, an icon of Silicon Valley, does have a point. As George Packer has written in the Atlantic, the coronavirus pandemic has revealed much of what is broken and decayed in politics and society in America. Our inability to make the medicines and stuff that we desperately need, like personal protective gear and critical care supplies, is a deadly example.

Silicon Valley and big tech in general have been lame in responding to the crisis. Sure, they have given us Zoom to keep the fortunate among us working and Netflix to keep us sane; Amazon is a savior these days for those avoiding stores; iPads are in hot demand and Instacart is helping to keep many self-isolating people fed. But the pandemic has also revealed the limitations and impotence of the world’s richest companies (and, we have been told, the most innovative place on earth) in the face of the public health crisis.

Big tech doesn’t build anything. It’s not likely to give us vaccines or diagnostic tests. We don’t even seem to know how to make a cotton swab. Those hoping the US could turn its dominant tech industry into a dynamo of innovation against the pandemic will be disappointed.

It’s not a new complaint. A decade ago, in the aftermath of what we once called “the” great recession, Andrew Grove, a Silicon Valley giant from an earlier era, wrote a piece in Bloomberg BusinessWeek decrying the loss of America’s manufacturing prowess. He described how Silicon Valley was built by engineers intent on scaling up their inventions: “the mythical moment of creation in the garage, as technology goes from prototype to mass production.” Grove said those who argued that we should let “tired old companies that do commodity manufacturing die” were wrong: scaling up and mass-producing products means building factories and hiring thousands of workers.

But Grove wasn’t just worried about the lost jobs as production of iPhones and microchips went overseas. He wrote: “Losing the ability to scale will ultimately damage our capacity to innovate.”

The pandemic has made clear this festering problem: the US is no longer very good at coming up with new ideas and technologies relevant to our most basic needs. We’re great at devising shiny, mainly software-driven bling that makes our lives more convenient in many ways. But we’re far less accomplished at reinventing health care, rethinking education, making food production and distribution more efficient, and, in general, turning our technical know-how loose on the largest sectors of the economy.

Economists like to measure technological innovation as productivity growth—the impact of new stuff and new ideas on expanding the economy and making us richer. Over the last two decades, those numbers for the US have been dismal. Even as Silicon Valley and the high-tech industries boomed, productivity growth slowed.

The last decade has been particularly disappointing, says John Van Reenen, an MIT economist who has recently written about the problem (pdf). He argues that innovation is the only way for an advanced country like the US to grow over the long run. There’s plenty of debate over the reasons behind sluggish productivity growth—but, Van Reenen says, there’s also ample evidence that a lack of business- and government-funded R&D is a big factor.

His analysis is particularly relevant because as the US begins to recover from the covid-19 pandemic and restart businesses, we will be desperate for ways to create high-wage jobs and fuel economic growth. Even before the pandemic, Van Reenen proposed “a massive pool of R&D resources that are invested in areas where market failures are the most substantial, such as climate change.” Already, many are renewing calls for a green stimulus and greater investments in badly needed infrastructure.

So yes, let’s build! But as we do, let’s keep in mind one of the most important failures revealed by covid-19: our diminished ability to innovate in areas that truly count, like health care and climate change. The pandemic could be the wake-up call the country needs to begin to address those problems.

Friday, 24 April

08:51

Saturday Morning Breakfast Cereal - Lent [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
No YOU'RE posting this at the wrong time of year.


Today's News:

08:00

Coming soon: Fedora on Lenovo laptops! [Fedora Magazine]

Today, I’m excited to share some big news with you—Fedora Workstation will be available on Lenovo ThinkPad laptops! Yes, I know, many of us already run a Fedora operating system on a Lenovo system, but this is different. You’ll soon be able to get Fedora pre-installed by selecting it as you customize your purchase. This is a pilot of Lenovo’s Linux Community Series – Fedora Edition, beginning with ThinkPad P1 Gen2, ThinkPad P53, and ThinkPad X1 Gen8 laptops, possibly expanding to other models in the future.

The Lenovo team has been working with folks at Red Hat who work on Fedora desktop technologies to make sure that the upcoming Fedora 32 Workstation is ready to go on their laptops. The best part about this is that we’re not bending our rules for them. Lenovo is following our existing trademark guidelines and respects our open source principles. That’s right—these laptops ship with software exclusively from the official Fedora repos! When they ship, you’ll see Fedora 32 Workstation. (Models which can benefit from the NVIDIA binary driver can install it in the normal way after the fact, by opting in to proprietary software sources.) 

Obviously, this is huge for us. Our installer aims to make the complicated process of installing Fedora to replace another operating system as easy as possible, but it’s still a barrier even for tech-literate people. A major-brand laptop with Fedora pre-installed will help bring Fedora to a wider audience. That and Lenovo’s commitment to fixing issues as part of the community means that everyone benefits from their Linux engineering work in the true spirit of open source collaboration. 

As Mark Pearson, Sr. Linux Developer at Lenovo, said, “Lenovo is excited to become a part of the Fedora community. We want to ensure an optimal Linux experience on our products. We are committed to working with and learning from the open source community.” Mark Pearson will be the featured guest in May’s Fedora Council Video Meeting – get your questions ready.

I’ll have more details about this project as we get closer to the launch. In the meantime, I invite you to come to our Open Neighborhood virtual booth at Red Hat Summit on April 28-29. The entire event is free and open to all.

05:00

Stream Firewall Events directly to your SIEM [The Cloudflare Blog]


The highest trafficked sites using Cloudflare receive billions of requests per day. But only about 5% of those requests typically trigger security rules, whether they be “managed” rules such as our WAF and DDoS protections, or custom rules such as those configured by customers using our powerful Firewall Rules and Rate Limiting engines.

When an enforcement action interrupts the flow of malicious traffic, a Firewall Event is logged with detail about the request, including which rule triggered the action and what action we took, e.g., challenged or blocked outright.

Previously, if you wanted to ingest all of these events into your SIEM or logging platform, you had to take the whole firehose of requests—good and bad—and then filter them client side. If you’re paying by the log line or scaling your own storage solution, this cost can add up quickly. And if you have a security team monitoring logs, they’re being sent a lot of extraneous data to sift through before determining what needs their attention most.

As of today, customers using Cloudflare Logs can create Logpush jobs that send only Firewall Events. These events arrive much faster than our existing HTTP request logs: they are typically delivered to your logging platform within 60 seconds of sending the response to the client.
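
Under the hood, a Logpush job is an object you can also create directly against the API. As a minimal sketch only (the destination string and ownership challenge below are placeholders you would obtain from your own logging platform, the job name is arbitrary, and $CF_ZONE_ID and $CLOUDFLARE_API_TOKEN hold the zone ID and scoped token set up later in this post), scoping a job to the firewall_events dataset looks roughly like this:

$ curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/logpush/jobs" \
     -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
     -H "Content-Type: application/json" \
     --data '{
       "name": "firewall-events-to-siem",
       "dataset": "firewall_events",
       "destination_conf": "<your destination, e.g. a cloud storage bucket or HTTP source URL>",
       "ownership_challenge": "<token read back from that destination>",
       "enabled": true
     }'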

In this post we’ll show you how to use Terraform and Sumo Logic, an analytics integration partner, to get this logging set up live in just a few minutes.

Process overview

The steps below take you through the process of configuring Cloudflare Logs to push security events directly to your logging platform. For purposes of this tutorial, we’ve chosen Sumo Logic as our log destination, but you’re free to use any of our analytics partners, or any logging platform that can read from cloud storage such as AWS S3, Azure Blob Storage, or Google Cloud Storage.

To configure Sumo Logic and Cloudflare we make use of Terraform, a popular Infrastructure-as-Code tool from HashiCorp. If you’re new to Terraform, see Getting started with Terraform and Cloudflare for a guided walkthrough with best practice recommendations such as how to version and store your configuration in git for easy rollback.

Once the infrastructure is in place, you’ll send a malicious request towards your site to trigger the Cloudflare Web Application Firewall, and watch as the Firewall Events generated by that request show up in Sumo Logic about a minute later.


Prerequisites

Install Terraform and Go

First you’ll need to install Terraform. See our Developer Docs for instructions.
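
On macOS with Homebrew (the same tool we use for Go below), one quick way at the time of writing is:

$ brew install terraform
$ terraform version

Any recent 0.12+ release should work for the configuration in this post.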

Next you’ll need to install Go. The easiest way on macOS to do so is with Homebrew:

$ brew install golang
$ export GOPATH=$HOME/go
$ mkdir $GOPATH

Go is required because the Sumo Logic Terraform Provider is a "community" plugin, which means it has to be built and installed manually rather than automatically through the Terraform Registry, as will happen later for the Cloudflare Terraform Provider.

Install the Sumo Logic Terraform Provider Module

The official installation instructions for installing the Sumo Logic provider can be found on their GitHub Project page, but here are my notes:

$ mkdir -p $GOPATH/src/github.com/terraform-providers && cd $_
$ git clone https://github.com/SumoLogic/sumologic-terraform-provider.git
$ cd sumologic-terraform-provider
$ make install

Prepare Sumo Logic to receive Cloudflare Logs

Install Sumo Logic livetail utility

While not strictly necessary, the livetail tool from Sumo Logic makes it easy to grab the Cloudflare Logs challenge token we’ll need in a minute, and also to view the fruits of your labor: seeing a Firewall Event appear in Sumo Logic shortly after the malicious request hit the edge.

On macOS:

$ brew cask install livetail
...
==> Verifying SHA-256 checksum for Cask 'livetail'.
==> Installing Cask livetail
==> Linking Binary 'livetail' to '/usr/local/bin/livetail'.
🍺  livetail was successfully installed!

Generate Sumo Logic Access Key

This step assumes you already have a Sumo Logic account. If not, you can sign up for a free trial here.

  1. Browse to https://service.$ENV.sumologic.com/ui/#/security/access-keys where $ENV should be replaced by the environment you chose on signup.
  2. Click the "+ Add Access Key" button, give it a name, and click "Create Key"
  3. In the next step you'll save the Access ID and Access Key that are provided as environment variables, so don’t close this modal until you do.

Generate Cloudflare Scoped API Token

  1. Log in to the Cloudflare Dashboard
  2. Click on the profile icon in the top-right corner and then select "My Profile"
  3. Select "API Tokens" from the nav bar and click "Create Token"
  4. Click the "Get started" button next to the "Create Custom Token" label

On the Create Custom Token screen:

  1. Provide a token name, e.g., "Logpush - Firewall Events"
  2. Under Permissions, change Account to Zone, and then select Logs and Edit, respectively, in the two drop-downs to the right
  3. Optionally, change Zone Resources and IP Address Filtering to restrict access for this token to specific zones or from specific IPs

Click "Continue to summary" and then "Create token" on the next screen. Save the token somewhere secure, e.g., your password manager, as it'll be needed in just a minute.

Set environment variables

Rather than add sensitive credentials to source files (that may get submitted to your source code repository), we'll set environment variables and have the Terraform modules read from them.

$ export CLOUDFLARE_API_TOKEN="<your scoped cloudflare API token>"
$ export CF_ZONE_ID="<tag of zone you wish to send logs for>"

We'll also need your Sumo Logic environment, Access ID, and Access Key:

$ export SUMOLOGIC_ENVIRONMENT="eu"
$ export SUMOLOGIC_ACCESSID="<access id from previous step>"
$ export SUMOLOGIC_ACCESSKEY="<access key from previous step>"

Create the Sumo Logic Collector and HTTP Source

We'll create a directory to store our Terraform project in and build it up as we go:

$ mkdir -p ~/src/fwevents && cd $_

Then we'll create the Collector and HTTP source that will store and provide Firewall Events logs to Sumo Logic:

$ cat <<'EOF' | tee main.tf
##################
### SUMO LOGIC ###
##################
provider "sumologic" {
    environment = var.sumo_environment
    access_id = var.sumo_access_id
}

resource "sumologic_collector" "collector" {
    name = "CloudflareLogCollector"
    timezone = "Etc/UTC"
}

resource "sumologic_http_source" "http_source" {
    name = "firewall-events-source"
    collector_id = sumologic_collector.collector.id
    timezone = "Etc/UTC"
}
EOF

Then we'll create a variables file so Terraform has credentials to communicate with Sumo Logic:

$ cat <<EOF | tee variables.tf
##################
### SUMO LOGIC ###
##################
variable "sumo_environment" {
    default = "$SUMOLOGIC_ENVIRONMENT"
}

variable "sumo_access_id" {
    default = "$SUMOLOGIC_ACCESSID"
}
EOF

With our Sumo Logic configuration set, we’ll initialize Terraform with terraform init and then preview what changes Terraform is going to make by running terraform plan:

$ terraform init

Initializing the backend...

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # sumologic_collector.collector will be created
  + resource "sumologic_collector" "collector" {
      + destroy        = true
      + id             = (known after apply)
      + lookup_by_name = false
      + name           = "CloudflareLogCollector"
      + timezone       = "Etc/UTC"
    }

  # sumologic_http_source.http_source will be created
  + resource "sumologic_http_source" "http_source" {
      + automatic_date_parsing       = true
      + collector_id                 = (known after apply)
      + cutoff_timestamp             = 0
      + destroy                      = true
      + force_timezone               = false
      + id                           = (known after apply)
      + lookup_by_name               = false
      + message_per_request          = false
      + multiline_processing_enabled = true
      + name                         = "firewall-events-source"
      + timezone                     = "Etc/UTC"
      + url                          = (known after apply)
      + use_autoline_matching        = true
    }

Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Assuming everything looks good, let’s execute the plan:

$ terraform apply -auto-approve
sumologic_collector.collector: Creating...
sumologic_collector.collector: Creation complete after 3s [id=108448215]
sumologic_http_source.http_source: Creating...
sumologic_http_source.http_source: Creation complete after 0s [id=150364538]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Success! At this point you could log into the Sumo Logic web interface and confirm that your Collector and HTTP Source were created successfully.

Create a Cloudflare Logpush Job

Before Cloudflare will start sending logs to your collector, you need to demonstrate the ability to read from it. This validation step prevents accidental (or intentional) misconfigurations from overrunning your log destination.

Tail the Sumo Logic Collector and await the challenge token

In a new shell window—you should keep the current one with your environment variables set for use with Terraform—we'll start tailing Sumo Logic for events sent from the firewall-events-source HTTP source.

The first time that you run livetail you'll need to specify your Sumo Logic Environment, Access ID and Access Key, but these values will be stored in the working directory for subsequent runs:

$ livetail _source=firewall-events-source
### Welcome to Sumo Logic Live Tail Command Line Interface ###
1 US1
2 US2
3 EU
4 AU
5 DE
6 FED
7 JP
8 CA
Please select Sumo Logic environment: 
See http://help.sumologic.com/Send_Data/Collector_Management_API/Sumo_Logic_Endpoints to choose the correct environment. 3
### Authenticating ###
Please enter your Access ID: <access id>
Please enter your Access Key <access key>
### Starting Live Tail session ###

Request and receive challenge token

Before requesting a challenge token, we need to figure out where Cloudflare should send logs.

We do this by asking Terraform for the receiver URL of the recently created HTTP source. Note that we modify the URL returned slightly as Cloudflare Logs expects sumo:// rather than https://.

$ export SUMO_RECEIVER_URL=$(terraform state show sumologic_http_source.http_source | grep url | awk '{print $3}' | sed -e 's/https:/sumo:/; s/"//g')

$ echo $SUMO_RECEIVER_URL
sumo://endpoint1.collection.eu.sumologic.com/receiver/v1/http/<redacted>

With URL in hand, we can now request the token.

$ curl -sXPOST -H "Content-Type: application/json" -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" -d '{"destination_conf":"'''$SUMO_RECEIVER_URL'''"}' https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/logpush/ownership

{"errors":[],"messages":[],"result":{"filename":"ownership-challenge-bb2912e0.txt","message":"","valid":true},"success":true}

Back in the other window where your livetail is running you should see something like this:

{"content":"eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4R0NNIiwidHlwIjoiSldUIn0..WQhkW_EfxVy8p0BQ.oO6YEvfYFMHCTEd6D8MbmyjJqcrASDLRvHFTbZ5yUTMqBf1oniPNzo9Mn3ZzgTdayKg_jk0Gg-mBpdeqNI8LJFtUzzgTGU-aN1-haQlzmHVksEQdqawX7EZu2yiePT5QVk8RUsMRgloa76WANQbKghx1yivTZ3TGj8WquZELgnsiiQSvHqdFjAsiUJ0g73L962rDMJPG91cHuDqgfXWwSUqPsjVk88pmvGEEH4AMdKIol0EOc-7JIAWFBhcqmnv0uAXVOH5uXHHe_YNZ8PNLfYZXkw1xQlVDwH52wRC93ohIxg.pHAeaOGC8ALwLOXqxpXJgQ","filename":"ownership-challenge-bb2912e0.txt"}

Copy the content value from above into an environment variable, as you'll need it in a minute to create the job:

$ export LOGPUSH_CHALLENGE_TOKEN="<content value>"

Create the Logpush job using the challenge token

With challenge token in hand, we'll use Terraform to create the job.

First you’ll want to choose the log fields that should be sent to Sumo Logic. You can enumerate the list by querying the dataset:

$ curl -sXGET -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/logpush/datasets/firewall_events/fields | jq .
{
  "errors": [],
  "messages": [],
  "result": {
    "Action": "string; the code of the first-class action the Cloudflare Firewall took on this request",
    "ClientASN": "int; the ASN number of the visitor",
    "ClientASNDescription": "string; the ASN of the visitor as string",
    "ClientCountryName": "string; country from which request originated",
    "ClientIP": "string; the visitor's IP address (IPv4 or IPv6)",
    "ClientIPClass": "string; the classification of the visitor's IP address, possible values are: unknown | clean | badHost | searchEngine | whitelist | greylist | monitoringService | securityScanner | noRecord | scan | backupService | mobilePlatform | tor",
    "ClientRefererHost": "string; the referer host",
    "ClientRefererPath": "string; the referer path requested by visitor",
    "ClientRefererQuery": "string; the referer query-string was requested by the visitor",
    "ClientRefererScheme": "string; the referer url scheme requested by the visitor",
    "ClientRequestHTTPHost": "string; the HTTP hostname requested by the visitor",
    "ClientRequestHTTPMethodName": "string; the HTTP method used by the visitor",
    "ClientRequestHTTPProtocol": "string; the version of HTTP protocol requested by the visitor",
    "ClientRequestPath": "string; the path requested by visitor",
    "ClientRequestQuery": "string; the query-string was requested by the visitor",
    "ClientRequestScheme": "string; the url scheme requested by the visitor",
    "Datetime": "int or string; the date and time the event occurred at the edge",
    "EdgeColoName": "string; the airport code of the Cloudflare datacenter that served this request",
    "EdgeResponseStatus": "int; HTTP response status code returned to browser",
    "Kind": "string; the kind of event, currently only possible values are: firewall",
    "MatchIndex": "int; rules match index in the chain",
    "Metadata": "object; additional product-specific information. Metadata is organized in key:value pairs. Key and Value formats can vary by Cloudflare security product and can change over time",
    "OriginResponseStatus": "int; HTTP origin response status code returned to browser",
    "OriginatorRayName": "string; the RayId of the request that issued the challenge/jschallenge",
    "RayName": "string; the RayId of the request",
    "RuleId": "string; the Cloudflare security product-specific RuleId triggered by this request",
    "Source": "string; the Cloudflare security product triggered by this request",
    "UserAgent": "string; visitor's user-agent string"
  },
  "success": true
}

Then you’ll append your Cloudflare configuration to the main.tf file:

$ cat <<EOF | tee -a main.tf

##################
### CLOUDFLARE ###
##################
provider "cloudflare" {
  version = "~> 2.0"
}

resource "cloudflare_logpush_job" "firewall_events_job" {
  name = "fwevents-logpush-job"
  zone_id = var.cf_zone_id
  enabled = true
  dataset = "firewall_events"
  logpull_options = "fields=RayName,Source,RuleId,Action,EdgeResponseStatus,Datetime,EdgeColoName,ClientIP,ClientCountryName,ClientASNDescription,UserAgent,ClientRequestHTTPMethodName,ClientRequestHTTPHost,ClientRequestPath&timestamps=rfc3339"
  destination_conf = replace(sumologic_http_source.http_source.url,"https:","sumo:")
  ownership_challenge = "$LOGPUSH_CHALLENGE_TOKEN"
}
EOF

And add to the variables.tf file:

$ cat <<EOF | tee -a variables.tf

##################
### CLOUDFLARE ###
##################
variable "cf_zone_id" {
  default = "$CF_ZONE_ID"
}
EOF

Next we re-run terraform init to install the latest Cloudflare Terraform provider. You'll need to make sure you have at least version 2.6.0, as this is the version in which we added Logpush job support:

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "cloudflare" (terraform-providers/cloudflare) 2.6.0...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

With the latest provider installed, we check out the plan and then apply:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

sumologic_collector.collector: Refreshing state... [id=108448215]
sumologic_http_source.http_source: Refreshing state... [id=150364538]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cloudflare_logpush_job.firewall_events_job will be created
  + resource "cloudflare_logpush_job" "firewall_events_job" {
      + dataset             = "firewall_events"
      + destination_conf    = "sumo://endpoint1.collection.eu.sumologic.com/receiver/v1/http/(redacted)"
      + enabled             = true
      + id                  = (known after apply)
      + logpull_options     = "fields=RayName,Source,RuleId,Action,EdgeResponseStatus,Datetime,EdgeColoName,ClientIP,ClientCountryName,ClientASNDescription,UserAgent,ClientRequestHTTPMethodName,ClientRequestHTTPHost,ClientRequestPath&timestamps=rfc3339"
      + name                = "fwevents-logpush-job"
      + ownership_challenge = "(redacted)"
      + zone_id             = "(redacted)"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
$ terraform apply --auto-approve
sumologic_collector.collector: Refreshing state... [id=108448215]
sumologic_http_source.http_source: Refreshing state... [id=150364538]
cloudflare_logpush_job.firewall_events_job: Creating...
cloudflare_logpush_job.firewall_events_job: Creation complete after 3s [id=13746]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Success! Last step is to test your setup.

Testing your setup by sending a malicious request

The following step assumes that you have the Cloudflare WAF turned on. Alternatively, you can create a Firewall Rule to match your request and generate a Firewall Event that way.
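
If you'd rather go the Firewall Rule route, here's a hypothetical sketch (not part of the original setup) that appends a filter and a rule to main.tf; the rule blocks any request whose query string contains logpush-test, which is enough to generate a Firewall Event. Apply it with terraform apply as before, then request a URL such as https://example.com/?logpush-test=1 against your zone:

$ cat <<'EOF' | tee -a main.tf

resource "cloudflare_filter" "logpush_test" {
  zone_id     = var.cf_zone_id
  description = "Requests used to test the Logpush pipeline"
  expression  = "(http.request.uri.query contains \"logpush-test\")"
}

resource "cloudflare_firewall_rule" "logpush_test" {
  zone_id     = var.cf_zone_id
  description = "Block test requests so they generate Firewall Events"
  filter_id   = cloudflare_filter.logpush_test.id
  action      = "block"
}
EOF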

First make sure that livetail is running as described earlier:

$ livetail "_source=firewall-events-source"
### Authenticating ###
### Starting Live Tail session ###

Then in a browser make the following request https://example.com/<script>alert()</script>. You should see the following returned:

Stream Firewall Events directly to your SIEM

And a few moments later in livetail:

{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"958052","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"958051","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973300","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973307","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"973331","Action":"log","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}
{"RayName":"58830d3f9945bc36","Source":"waf","RuleId":"981176","Action":"drop","EdgeColoName":"LHR","ClientIP":"203.0.113.69","ClientCountryName":"gb","ClientASNDescription":"NTL","UserAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","ClientRequestHTTPMethodName":"GET","ClientRequestHTTPHost":"upinatoms.com"}

Note that for this one malicious request Cloudflare Logs actually sent 6 separate Firewall Events to Sumo Logic. The reason for this is that this specific request triggered a variety of different Managed Rules: #958051, 958052, 973300, 973307, 973331, and 981176.
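
If you want to reduce each event to the essentials as it arrives, you can pipe livetail through jq (assuming jq is installed; grep filters out livetail's non-JSON banner lines, and the buffering flags keep the live stream flowing):

$ livetail "_source=firewall-events-source" | grep --line-buffered '^{' | jq --unbuffered -r '[.RuleId, .Action, .ClientRequestHTTPHost] | @tsv'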

Seeing it all in action

Here's a demo of launching livetail, making a malicious request in a browser, and then seeing the result sent from the Cloudflare Logpush job:

Stream Firewall Events directly to your SIEM

Thursday, 23 April

17:52

Cloud Jewels: Estimating kWh in the Cloud [Code as Craft]

Blue jewels rain down from golden clouds in these Lightning Storm Earrings by GojoDesign on Etsy.

Image: Lightning Storm Earrings, GojoDesign on Etsy

Etsy has been increasingly enjoying the perks of public cloud infrastructure for a few years, but has been missing a crucial feature: we’ve been unable to measure our progress against one of our key impact goals for 2025 — to reduce our energy intensity by 25%. Cloud providers generally do not disclose to customers how much energy their services consume. To make up for this lack of data, we created a set of conversion factors called Cloud Jewels to help us roughly convert our cloud usage information (like Google Cloud usage data) into approximate energy used. We are publishing this research to begin a conversation and a collaboration that we hope you’ll join, especially if you share our concerns about climate change.

This isn’t meant as a replacement for energy use data or guidance from Google, Amazon or another provider. Nor can we guarantee the accuracy of the rough estimates the tool provides. Instead, it’s meant to give us a sense of energy usage and relative changes over time based on aggregated data on how we use the cloud, in light of publicly-available information.

A little background

In the face of a changing climate, we at Etsy are committed to reducing our ecological footprint. In 2017, we set a goal of reducing the intensity of our energy use by 25% by 2025, meaning we should use less energy in proportion to the size of our business. In order to evaluate ourselves against our 25% energy intensity reduction goal, we have historically measured our energy usage across our footprint, including the energy consumption of servers in our data centers.

Three graphs to illustrate decreased energy intensity (increased efficiency), showing that energy usage would grow less quickly than business size.

In early 2020, we finished our two-year migration from our own physical servers in a colocated data center to Google Cloud. In addition to the massive increase in the power and flexibility of our computing capabilities, the move was a win for our sustainability efforts because of the efficiency of Google’s data centers. Our old data centers had an average PUE (Power Usage Effectiveness) of 1.39 (FY18 average across colocated data centers), whereas Google’s data centers have a combined average PUE of 1.10. PUE is a ratio of the total amount of energy a data center uses to how much energy goes to powering computers. It captures how efficient factors like the building itself and air conditioning are in the data center.

Illustration of PUE (Power Usage Effectiveness) as the ratio of overall power used by a datacenter to the power used by computers in it.

While a lower PUE helps our energy footprint significantly, we need to be able to measure and optimize the amount of power that our servers draw. Knowing how much energy each of our workloads uses helps us make design and code decisions that optimize for sustainability. The Google Cloud team has been a terrific partner to us throughout our migration, but they are unable to provide us with data about our cloud energy consumption. This is a challenge across the industry: neither Amazon Web Services nor Microsoft Azure provide this information to customers. We have heard concerns that range from difficulties attributing energy use to individual customers to sensitivities around proprietary information that could reveal too much about cloud providers’ operations and financial position.

We thought about how we might be able to estimate our energy consumption in Google Cloud using the data we do have: Google provides us with usage data that shows us how many virtual CPU (Central Processing Unit) seconds we used, how much memory we requested for our servers, how many terabytes of data we have stored for how long, and how much networking traffic we were responsible for. 

Our supposition was that if we could come up with general estimates for how many watt-hours (Wh) compute, storage and networking draw in a cloud environment, particularly based on public information, then we could apply those coefficients to our usage data to get at least a rough estimate of our cloud computing energy impact.

We are calling this set of estimated conversion factors Cloud Jewels. Other cloud computing consumers can look at this and see how it might work with their own energy usage across providers and usage data. The goal is to help cloud users across the industry to help refine our estimates, and ultimately help us encourage cloud providers to empower their customers with more accurate cloud energy consumption data.

Illustration of what Cloud Jewels seeks to quantify: the power used by compute and storage.

Methodology

The sources that most influenced our methodology were the U.S. Data Center Energy Usage Report, The Data Center as a Computer, and the SPEC power report. We also spoke with industry experts Arman Shehabi, Jon Koomey, and Jon Taylor, who suggested additional resources and reviewed our methodology.

We roughly assumed that we could attribute power used to: 

  • running a virtual server (compute), 
  • memory (RAM), 
  • storage, and 
  • networking.

Using the resources we found online, we were able to determine what we think are reasonable, conservative estimates for the amount of energy that compute and storage tasks consume. We are aiming for a conservative over-estimate of energy consumed to make sure we are holding ourselves fully accountable for our computing footprint. We have yet to determine a reasonable way to estimate the impact of RAM or network usage, but we welcome contributions to this work! We are open-sourcing a script for others to apply these coefficients to their usage data, and the full methodology is detailed in our repository on Github.

Cloud Jewels coefficients

The following coefficients are our estimates for how many watt-hours (Wh) it takes to run a virtual server and how many watt-hours (Wh) it takes to store a terabyte of data on HDD (hard disk drive) or SSD (solid-state drive) disks in a cloud computing environment:

2.10 Wh per vCPUh [Server]

0.89 Wh/TBh for HDD storage [Storage]

1.52 Wh/TBh for SSD storage [Storage]

On confidence

As you may note: we are using point estimates without confidence intervals. This is partly intentional and highlights the experimental nature of this work. Our sources also provide single, rough estimates without confidence intervals, so we decided against numerically estimating our confidence so as to not provide false precision. Our work has been reviewed by several industry experts and our energy and carbon metrics for cloud computing have been assured by PricewaterhouseCoopers LLP. That said, we acknowledge that this estimation methodology is only a first step in giving us visibility into the ecological impacts of our cloud computing usage, which may evolve as our understanding improves. Whenever there has been a choice, we have erred on the side of conservative estimates, taking responsibility for more energy consumption than we are likely using to avoid overestimating our savings. While we have limited data, we are using these estimates as a jumping-off point and carrying forth in order to push ourselves and the industry forward. We especially welcome contributions and opinions. Let the conversation begin!

Server wattage estimate

At a high level, to estimate server wattage, we used a general formula for calculating server energy use over time:

W = Min + Util*(Max – Min)

Wattage = Minimum wattage + Average CPU Utilization * (Maximum wattage – minimum wattage)

A graph portrays CPU Utilization increasing and decreasing over time.

To determine minimum and maximum CPU wattage, we averaged the values reported by manufacturers of servers that are available in the SPEC power database (filtered to servers that we deemed likely to be similar to Google’s servers), and we used an industry average server utilization estimate (45%) from the US Data Center Energy Usage Report.
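
For illustration only, with made-up wattages rather than the SPEC-derived averages we actually used: a hypothetical server idling at 60 W and peaking at 200 W, at 45% average utilization, works out as follows.

$ awk 'BEGIN { print 60 + 0.45 * (200 - 60) }'
123

That is, an estimated average draw of 123 W for that hypothetical machine; dividing by its vCPU count would give a per-vCPU-hour figure comparable to the coefficient above.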

Storage wattage estimate

To estimate storage wattage, we used industry-wide estimates from the U.S. Data Center Usage Report. That report contains estimated average capacity of disks as well as average disk wattage. We used both those estimates to get an estimated wattage per terabyte.
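
As a purely illustrative calculation (hypothetical drive specs, not the report's actual figures): a drive that draws 8.9 W and holds 10 TB works out to the published HDD coefficient.

$ awk 'BEGIN { printf "%.2f Wh per TB-hour\n", 8.9 / 10 }'
0.89 Wh per TB-hour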

Networking non-estimate

The resources we found related to networking energy estimates were for general internet data transfer, as opposed to intra data center traffic between connected servers. Networking also made up a significantly smaller portion of our overall usage cost, so we are assuming it requires less energy than compute and storage. Finally, as far as the research we found indicated, the energy attributable to networking is generally far smaller than that attributable to compute and storage.

A graph shows trends of US data center electricity use from 2000-2020. Two alternative scenarios begin in 2010; a steeply increasing line portrays the increase in usage if efficiency remained at its 2010 level through 2020. A decreasing line portrays the electricity usage if best practices for energy usage were adopted.
Source: Arman Shehabi, Sarah J Smith, Eric Masanet and Jonathan Koomey; Data center growth in the United States: decoupling the demand for services from electricity use; 2018

Application to usage data

We aggregated and grouped our usage data by SKU then categorized it by which type of service was applicable (“compute”, “storage”, “n/a”), converted the units to hours and terabyte-hours, then applied our coefficients. Since we do not yet have a coefficient for networking or RAM that we feel confident in, we are leaving that data out for now. The experts we have consulted with are confident that our coefficients are conservative enough to account for our overall energy consumption without separate consideration for networking and RAM.
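
As a minimal sketch of that last step (the usage.csv layout and the SKU-to-category mapping are assumptions here, not our actual export format), applying the coefficients to categorized usage could look something like this, with compute in vCPU-hours and storage in terabyte-hours:

$ cat usage.csv
compute,120000
storage-hdd,45000
storage-ssd,9000
$ awk -F, '
    $1 == "compute"     { wh += $2 * 2.10 }   # 2.10 Wh per vCPU-hour
    $1 == "storage-hdd" { wh += $2 * 0.89 }   # 0.89 Wh per TB-hour on HDD
    $1 == "storage-ssd" { wh += $2 * 1.52 }   # 1.52 Wh per TB-hour on SSD
    END { printf "Estimated energy: %.1f kWh\n", wh / 1000 }
' usage.csv
Estimated energy: 305.7 kWh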

Results

Applying our Cloud Jewels coefficients to our aggregated usage data and comparing the estimates to our former local data center actual kWh totals over the past two years indicates that our energy footprint in Google Cloud is smaller than it was on premises. It’s important to note that we are not taking into account networking or RAM, nor Google-maintained services like BigQuery, BigTable, StackDriver, or App Engine. However, overall, relatively speaking over time (assuming our estimates are even moderately close to accurate and verified to be conservative), we are on track to be using less overall energy to do more computing than we were two years ago (as our business has grown), meaning we are making progress towards our energy intensity reduction goal. 

A graph displays monthly total kWh used with Etsy's former datacenter (actual) compared to with Google Cloud (estimated). Google Cloud kWh is significantly lower.

We used historical data to estimate what our energy savings are since moving to Google Cloud.

A graph shows estimated annual consumption in kWh with Etsy's former colocated datacenter (actual) compared to with Google Cloud (estimated). Google Cloud annual consumption is significantly lower.

Assumes ~16% YoY growth in former colocated data centers and actual/expected ~23% YoY growth in cloud usage between 2019-20 and beyond.

Our estimated savings over the five year period are roughly equivalent to: 

  • ~20,000 sewing machines (running 24/7) 
  • ~147,000 light bulbs (running 24/7) 
  • ~1,200 dishwashers (running 24/7) 

Next steps

We would next like to find ways to estimate the energy cost of network traffic and memory. There are also minor refinements we could make to our current estimates, though we want to ensure that further detail does not lead to false precision, that we do not overcomplicate the methodology, and that the work we publish is as generally applicable and useful to other companies as possible.

Part of our reasoning for open-sourcing this work is selfish: we want input! We welcome contributions to our estimates and additional resources that we should be using to refine them. We hope that publishing these coefficients will help other companies who use cloud computing providers estimate their energy footprint. And finally we hope that efforts and estimations encourage more public information about cloud energy usage, and particularly help cloud providers find ways to determine and deliver data like this, either as broad coefficients for estimation or actual energy usage metrics collected from their internal monitoring.

09:08

Internet performance during the COVID-19 emergency [The Cloudflare Blog]

Internet performance during the COVID-19 emergency

A month ago I wrote about changes in Internet traffic caused by the COVID-19 emergency. At the time I wrote:

Cloudflare is watching carefully as Internet traffic patterns around the world alter as people alter their daily lives through home-working, cordon sanitaire, and social distancing. None of these traffic changes raise any concern for us. Cloudflare's network is well provisioned to handle significant spikes in traffic. We have not seen, and do not anticipate, any impact on our network's performance, reliability, or security globally.

That holds true today; our network is performing as expected under increased load. Overall the Internet has shown that it was built for this: designed to handle huge changes in traffic, outages, and a changing mix of use. As we are well into April I thought it was time for an update.

Growth

Here's a chart showing the relative change in Internet use as seen by Cloudflare since the beginning of the year. I've calculated a trailing seven-day moving average for each country and used December 29, 2019 as the reference point.

Internet performance during the COVID-19 emergency
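
For the curious, here's roughly how such a series can be produced. This is a sketch, not the actual pipeline behind the chart; it assumes a per-country file of date,requests rows in chronological order that includes the reference date:

$ awk -F, '
    { n++; dates[n] = $1; vals[n] = $2 }
    END {
      for (i = 7; i <= n; i++) {
        sum = 0
        for (j = i - 6; j <= i; j++) sum += vals[j]   # trailing 7-day average
        avg[i] = sum / 7
        if (dates[i] == "2019-12-29") ref = avg[i]    # reference point
      }
      for (i = 7; i <= n; i++)
        printf "%s %+.1f%%\n", dates[i], (avg[i] / ref - 1) * 100
    }
' traffic-gb.csv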

On this chart the highest growth in Internet use has been in Portugal: it's currently running at about a 50% increase with Spain close behind followed by the UK. Italy flattened out at about a 40% increase in usage towards the end of March and France seems to be plateauing at a little over 30% up on the end of last year.

It's interesting to see how steeply Internet use grew in the UK, Spain and Portugal (the red, yellow and blue lines rise very steeply), with Spain and Portugal almost in unison and the UK lagging by about two weeks.

Looking at some other major economies we see other, yet similar patterns.

Internet performance during the COVID-19 emergency

Similar increases in utilization are seen here. The US, Canada, Australia and Brazil are all running at between 40% and 50% above the level of use at the beginning of the year.

Stability

We measure the TCP RTT (round trip time) between our servers and visitors to Internet properties that are Cloudflare customers. This gives us a measure of the speed of the networks between us and end users, and if the RTT increases it is also a measure of congestion along the path.

Looking at TCP RTT over the last 90 days can help identify changes in congestion or the network. Cloudflare connects widely to the Internet via peering (and through the use of transit) and we connect to the largest number of Internet exchanges worldwide to ensure fast access for all users.

Cloudflare is also present in 200 cities worldwide; thus the TCP RTT seen by Cloudflare gives a measure of the performance of end-user networks within a country. Here's a chart showing the median and 95th percentile TCP RTT in the UK in the last 90 days.

Internet performance during the COVID-19 emergency

What's striking in this chart is that despite the massive increase in Internet use (the grey line), the TCP RTT hasn't changed significantly. From our vantage point UK networks are coping well.

Here's the situation in Italy:

Internet performance during the COVID-19 emergency

The picture here is slightly different. Both median and 95th percentile TCP RTT increased as traffic increased. This indicates that networks aren't operating as smoothly in Italy. It's noticeable, though, that as traffic has plateaued the TCP RTT has improved somewhat (take a look at the 95th percentile) indicating that ISPs and other network providers in Italy have likely taken action to improve the situation.

This doesn't mean that Italian Internet is in trouble, just that it's strained more than, say, the Internet in the UK.

Conclusion

The Internet has seen incredible, sudden growth in traffic but continues to operate well. What Cloudflare sees reflects what we've heard anecdotally: some end-user networks are feeling the strain of the sudden change of load but are working and helping us all cope with the societal effects of COVID-19.

It's hard to imagine another utility (say electricity, water or gas) coping with a sudden and continuous increase in demand of 50%.

08:52

Saturday Morning Breakfast Cereal - Time Travel [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The easiest way to ruin all time travel movies is to think about time travel for more than 3 seconds.


Today's News:

02:00

Play Stadia Games from Fedora [Fedora Magazine]

Do you enjoy playing games on your Fedora system? You might be interested to know that Stadia is available to play via a Google Chrome browser on your Fedora desktop. Additionally, Stadia is free for two months starting April 8th. Follow these simple steps to install the Google Chrome web browser in Fedora and enjoy the new world of cloud-based gaming on your Fedora Linux PC!

  1. Go to https://www.google.com/chrome using any available web browser and click the big blue button labeled Download Chrome.
  2. Select the 64 bit .rpm (For Fedora/openSUSE) package format and then click Accept and Install.
  3. You should be presented with a prompt asking what you want to do with the file. Choose the Open with Software Install option if you see this prompt.
  4. Click Install in the Software Install application to install Google Chrome. You may be prompted for your password to authorize the installation.

If you don’t see the Open with Software Install option at step 3, choose to save the installer to your Downloads folder instead. Once you have the installer downloaded, enter the following command in a terminal using sudo:

$ sudo dnf install ~/Downloads/google-chrome-*.rpm

Once you have Google Chrome installed, use it to browse to https://stadia.google.com/ and follow the directions there to create your user profile and try out the games.

Chrome installation demonstration

Chrome installation on Fedora 31

Additional resources


Photo by Derek Story on Unsplash.

Wednesday, 22 April

Tuesday, 21 April

09:18

Saturday Morning Breakfast Cereal - A [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I miss the good old days when a story could just end with the kids getting eaten. Would've made the Narnia books a lot better.


Today's News:

05:00

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole [The Cloudflare Blog]

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Like many who are able, I am working remotely and in this post, I describe some of the ways to deploy Cloudflare Gateway directly from your home. Gateway’s DNS filtering protects networks from malware, phishing, ransomware and other security threats. It’s not only for corporate environments - it can be deployed on your browser or laptop to protect your computer or your home WiFi. Below you will learn how to deploy Gateway, including, but not limited to, DNS over HTTPS (DoH) using a Raspberry Pi, Pi-hole and DNSCrypt.

We recently launched Cloudflare Gateway and shortly thereafter, offered it for free until at least September to any company in need. Cloudflare leadership asked the global Solutions Engineering (SE) team, amongst others, to assist with the incoming onboarding calls. As an SE at Cloudflare, our role is to learn new products, such as Gateway, to educate, and to ensure the success of our prospects and customers. We talk to our customers daily, understand the challenges they face and consult on best practices. We were ready to help!

One way we stay on top of all the services that Cloudflare provides, is by using them ourselves. In this blog, I'll talk about my experience setting up Cloudflare Gateway.

Gateway sits between your users, device or network and the public Internet. Once you setup Cloudflare Gateway, the service will inspect and manage all Internet-bound DNS queries. In simple terms, you point your recursive DNS to Cloudflare and we enforce policies you create, such as activating SafeSearch, an automated filter for adult and offensive content that's built into popular search engines like Google, Bing, DuckDuckGo, Yandex and others.

There are various methods and locations DNS filtering can be enabled, whether it’s on your entire laptop, each of your individual browsers and devices or your entire home network. With DNS filtering front of mind, including DoH, I explored each model. The model you choose ultimately depends on your objective.

But first, let’s review what DNS and DNS over HTTPS are.

DNS is the protocol used to resolve hostnames (like www.cloudflare.com) into IP addresses so computers can talk to each other. DNS is an unencrypted clear-text protocol, meaning that any eavesdropper or machine between the client and the DNS server can see the contents of the DNS request. DNS over HTTPS adds security to DNS by encrypting DNS queries using HTTPS (the protocol we use to encrypt the web).
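
To make the difference concrete, here's a quick illustration (using Cloudflare's public 1.1.1.1 resolver and its JSON API rather than Gateway itself): the first lookup travels as clear text over port 53, while the second rides inside an ordinary HTTPS request.

$ dig +short www.cloudflare.com @1.1.1.1
$ curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=www.cloudflare.com&type=A'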

Let’s get started

Navigate to https://dash.teams.cloudflare.com. If you don’t already have an account, the sign up process only takes a few minutes.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Configuring a Gateway location, shown below, is the first step.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Conceptually similar to HTTPS traffic, when our edge receives an HTTPS request, we match the incoming SNI header to the correct domain’s configuration (or for plain text HTTP the Host header). And when our edge receives a DNS query, we need a similar mapping to identify the correct configuration. We attempt to match configurations, in this order:

  1. DNS over HTTPS check and lookup based on unique hostname
  2. IPv4 check and lookup based on source IPv4 address
  3. Lookup based on IPv6 destination address

Let’s discuss each option.

DNS over HTTPS

The first attempt to match DNS requests to a location is pointing your traffic to a unique DNS over HTTPS hostname. After you configure your first location, you are given a unique destination IPv6 address and a unique DoH endpoint as shown below. Take note of the hostname as you will need it shortly. I’ll first discuss deploying Gateway in a browser and then to your entire network.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

DNS over HTTPS is my favorite method for deploying Gateway and securing DNS queries at the same time. Enabling DoH prevents anyone but the DNS server of your choosing from seeing your DNS queries.

Enabling DNS over HTTPS in browsers

By enabling it in a browser, only queries issued in that browser are affected. It’s available in most browsers and there are quite a few tutorials online to show you how to turn it on.

Browser    Supports DoH    Supports Custom Alternative Providers    Supports Custom Servers
Chrome     Yes             Yes                                      No
Safari     No              No                                       No
Edge       Yes**           Yes**                                    No**
Firefox    Yes             Yes                                      Yes
Opera      Yes*            Yes*                                     No*
Brave      Yes*            Yes*                                     No*
Vivaldi    Yes*            Yes*                                     No*

* Chromium based browser. Same support as Chrome
** Most recent version of Edge is built on Chromium

Chromium based browsers

Using Chrome as an example on behalf of all the Chromium-based browsers, enabling DNS over HTTPS is straightforward, but as you can see in the table above, there is one issue: Chrome does not currently support custom servers. So while it is great that a user can protect their DNS queries, they cannot choose the provider, including Gateway.

Here is how to enable DoH in Chromium based browsers:

Navigate to chrome://flags and set the DNS over HTTPS flag to Enabled.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Firefox

Firefox is the exception to the rule because they support both DNS over HTTPS and the ability to define a custom server. Mozilla provides detailed instructions about how to get started.

Once enabled, navigate to Preferences -> General -> Network Settings and select ‘Settings’. Scroll to the section ‘Enable DNS over HTTPS’, select ‘Custom’ and input your Gateway DoH address, as shown below:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Optionally, you can enable Encrypted SNI (ESNI), which is an IETF draft for encrypting the SNI headers, by toggling the ‘network.security.esni.enabled’ preference in about:config to ‘true’. Read more about how Cloudflare is one of the few providers that supports ESNI by default.

Congratulations, you’ve configured Gateway using DNS over HTTPS! Keep in mind that only queries issued from the configured browser will be secured. Any other device connected to your network such as your mobile devices, gaming platforms, or smart TVs will still use your network's default DNS server, likely assigned by your ISP.

Configuring Gateway for your entire home or business network

Deploying Gateway at the router level allows you to secure every device on your network without needing to configure each one individually.

Requirements include:

  • Access to your router's administrative portal
  • A router that supports DHCP forwarding
  • Raspberry Pi with WiFi or Ethernet connectivity

There are few, if any, consumer routers on the market that natively support custom DoH servers, and likely few that natively support DoH at all. A newer router I purchased, the Netgear R7800, does not support either, but it is one of the most popular routers for flashing dd-wrt or open-wrt, which both support DoH. Unfortunately, neither of these popular firmwares supports custom servers.

While it’s rare to find a router that supports DoH out of the box, DoH with custom servers, or has potential to be flashed, it’s common for a router to support DHCP forwarding (dd-wrt and open-wrt both support DHCP forwarding). So, I installed Pi-hole on my Raspberry Pi and used it as my home network’s DNS and DHCP server.

Getting started with Pi-hole and dnscrypt-proxy

If your Raspberry Pi is new and hasn’t been configured yet, follow their guide to get started. (Note: by default, ssh is disabled, so you will need a keyboard and/or mouse to access your box in your terminal.)

Once your Raspberry Pi has been initialized, assign it a static IP address in the same network as your router (in my case, the router's LAN address is 192.168.1.2).

Using vim:
sudo vi /etc/dhcpcd.conf

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole
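
If you're reading along without the screenshot, the relevant block of /etc/dhcpcd.conf looks roughly like this. The interface name and addresses below are the ones used in this post (Pi at 192.168.1.6, router at 192.168.1.2); pointing the Pi's own DNS at 127.0.0.1 is an assumption, so adjust everything for your network:

interface eth0
static ip_address=192.168.1.6/24
static routers=192.168.1.2
static domain_name_servers=127.0.0.1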

Restart the service.
sudo /etc/init.d/dhcpcd restart

Check that your static IP is configured correctly.
ip addr show dev eth0

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Now that your Raspberry Pi is configured, we need to install Pi-hole: https://github.com/pi-hole/pi-hole/#one-step-automated-install

I chose to use dnscrypt-proxy as the local service that Pi-hole will use to forward all DNS queries. You can find the latest version here.

To install dnscrypt-proxy on your pi-hole, follow these steps:

wget https://github.com/DNSCrypt/dnscrypt-proxy/releases/download/2.0.39/dnscrypt-proxy-linux_arm-2.0.39.tar.gz
tar -xf dnscrypt-proxy-linux_arm-2.0.39.tar.gz
mv linux-arm dnscrypt-proxy
cd dnscrypt-proxy
cp example-dnscrypt-proxy.toml dnscrypt-proxy.toml

The next step is to build a DoH stamp. A stamp is simply an encoded string that captures your DoH server address and other options.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

As a reminder, you can find Gateway’s unique DoH address in your location configuration.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

At the very bottom of your dnscrypt-proxy.toml configuration file, uncomment both lines beneath [static].

  • Change  [static.'myserver'] to [static.'gateway']
  • Replace the default stamp with the one generated above

The static section should look similar to this:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole
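
In text form, the uncommented section ends up looking something like this, where the sdns:// value is a placeholder for the stamp you generated above:

[static]
  [static.'gateway']
  stamp = 'sdns://<your Gateway DoH stamp>'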

Also in the dnscrypt-proxy.toml configuration file, update the following settings:
server_names = ['gateway']
listen_addresses = ['127.0.0.1:5054']
fallback_resolvers = ['1.1.1.1:53', '1.0.0.1:53']
cache = false

Now we need to install dnscrypt-proxy as a service and configure Pi-hole to point to the listen_addresses defined above.

Install dnscrypt-proxy as a service:
sudo ./dnscrypt-proxy -service install

Start the service:
sudo ./dnscrypt-proxy -service start

You can validate the status of the service by running:
sudo service dnscrypt-proxy status or netstat -an | grep 5054:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole
Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Also, confirm the upstream is working by querying localhost on port 5054:
dig www.cloudflare.com  -p 5054 @127.0.0.1

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

You will see a matching request in the Gateway query log (note the timestamps match):

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Configuring DNS and DHCP in the Pi-hole administrative console

Open your browser and navigate to the Pi-hole’s administrative console. In my case, it’s http://192.168.1.6/admin

Go to Settings -> DNS to modify the upstream DNS provider, which we’ve just configured to be dnscrypt-proxy.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Change the upstream server to 127.0.0.1#5054 and hit save. According to Pi-hole's forward destination determination algorithm, the fastest upstream DNS server is chosen. Therefore, if you want to deploy redundancy, it has to be done in the DNSCrypt configuration.

Almost done!

In Settings->DHCP, enable the DHCP server:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Hit save.

At this point, your Pi-hole server is running in isolation and we need to deploy it to your network. The simplest way to ensure your Pi-hole is being used exclusively by every device is to use your Pi-hole as both a DNS server and a DHCP server. I’ve found that routers behave oddly if you outsource DNS but not DHCP, so I outsource both.

After I enabled the DHCP server on the Pi-hole, I set the router’s configuration to DHCP forwarding and defined the Pi-hole static address:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

After applying the router's configuration, I confirmed it was working properly by forgetting the network in my network settings and re-joining. This results in a new IPv4 address (from our new DHCP server) and, if all goes well, a new DNS server (the IP of the Pi-hole).

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole
Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Done!

Now that our entire network is using Gateway, we can configure Gateway Policies to secure our DNS queries!

IPv4 check and lookup based on source IPv4 address

For this method to work properly, Gateway requires that your network has a static IPv4 address. If your IP address does not change, then this is the quickest solution (but still does not prevent third-parties from seeing what domains you are going to). However, if you are configuring Gateway in your home, like I am, and you don’t explicitly pay for this service, then most likely you have a dynamic IP address. These addresses will always change when your router restarts, intentionally or not.

Lookup based on IPv6 destination address

Another option for matching requests in Gateway is to configure your DNS server to point to a unique IPv6 address provided to you by Cloudflare. Any DNS query pointed to this address will be matched properly on our edge.

This might be a good option if you want to use Cloudflare Gateway on your entire laptop by setting your local DNS resolution to this address. However, if your home router or ISP does not support IPv6, DNS resolution won’t work.

Conclusion

In this blog post, we've discussed the various ways Gateway can be deployed and how encrypted DNS is one of the next big Internet privacy improvements. Deploying Gateway can be done on a per device basis, on your router or even with a Raspberry Pi.

Monday, 20 April

07:50

Saturday Morning Breakfast Cereal - Trolley [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The actual answer to the trolley problem is that on a sufficiently large time scale none of it matters anyway, so just go with your gut.


Today's News:

Sunday, 19 April

09:48

Saturday, 18 April

11:00

Helping with COVID-19 Projects: Cloudflare Workers now free through Project Galileo [The Cloudflare Blog]

Helping with COVID-19 Projects: Cloudflare Workers now free through Project Galileo
Helping with COVID-19 Projects: Cloudflare Workers now free through Project Galileo

The Internet has been vital to our response to the COVID-19 crisis: enabling researchers to communicate with the rest of the world, connecting resources with people who need them, and sharing data about the spread.

It’s been amazing to see some of the projects people have stood up on Cloudflare Workers to assist during this crisis. Workers allows you to get set up in minutes, it’s fast and scalable out of the box, and there’s no infrastructure to maintain or scale, which is great if you want to create a project quickly.

To support critical web projects that help in the fight against the COVID-19 pandemic, we’re giving free access to our Cloudflare Workers compute platform through Project Galileo. We believe sites, apps, APIs, and tools that can help people with COVID-19 are exactly the type of critically important projects that Project Galileo was designed to support.    

Free Cloudflare Workers

One of the earliest impacts of the COVID-19 crisis was the switch that many organizations made to a fully remote model. As that happened, we realized that many organizations' VPNs were not up to the task of scaling to support this increased load, so Cloudflare made Cloudflare for Teams free through at least September 1, 2020.

If you’re working on a COVID-19 related project, follow the Project Galileo link and submit a request — we’ll get back to you as quickly as we can. And if you’re interested in getting started with Workers, there are some links at the bottom of the post that will help.

Example Projects

Amidst all the devastating news, it’s been really inspiring to see developers jump in and build tools to help the public through the pandemic. We are excited to share a few of the stories they’ve shared with us.

API-COVID19-In

API-COVID-19-In, built by Amod Malviya, is an API for tracking COVID-19 cases in India, sourced from The Ministry of Health and Family Welfare and separately from unofficial sources.

"I created api-covid19-in to make it easier for people working all over India to contribute to fighting this situation — be it by creating mass transparency (the aggregate data API), or detecting patterns (the crowd sourced patient data API), or planning (hospital beds API)".

Why Workers?

  • Very simple to be up & running. From the first code being written, to being up and running happened in less than an hour.
  • Better than the alternatives of maintaining an Origin (higher cost), or exposing it via Github pages (can't do compute on every call).
  • Not having to be worried about scaling or performance.

MakeFaceMasks

A few weeks ago, a Belgian grassroots movement of makers started to brainstorm on how they can fight the COVID-19 crisis. One of the projects is MakeFaceMasks. They have created a DIY manual to sew masks, which has been approved by the Belgian Government.

Why Workers?

  • We could automate our development/translation flow. This allowed us to quickly generate translated versions of the website.
  • Websites are deployed automatically with Github Actions.
  • Handle the load: the day we launched we immediately attracted 100,000 unique visitors without any downtime.

Mask A Hero NY

Mask a Hero NY is a volunteer-run site that matches medical professionals that need Personal Protective Equipment (PPE) during the COVID-19 pandemic with people that can donate it.

"We launched it about 2 weeks ago. The COVID-19 situation in New York is very worrying. My friends that are doctors are doing everything they can to help and they saw a lot of people on Facebook groups offering to donate small amounts of PPE, but it was hard for these people to know where it was needed the most and coordinate pickups. So my friends reached out to me to see if I could help. I pulled in my colleague MJ, and we designed and built the site in about 2 days.

The site has been a big success so far. It has facilitated over 27,000 mask donations already with a lot more to come this week. It's been featured on NBC News, CBS, MSNBC, on Katie Couric's social media and newsletter, some NY-area newspapers, and more. That matters because each feature has been followed by an increase in donation submissions. The site has facilitated donations to a variety of large and small hospitals and medical departments that are feeling the strain during this time. We're really proud of the impact so far but want to do even more to help these medical professionals."

Why Workers?

"When we built the site, we wanted the absolute easiest and most straightforward tech stack. It's a 4-page site with no dynamic information. A static site generator was the obvious choice, so I chose Jekyll. Then for hosting, the last thing I want to deal with on a static site is complex server configuration and uptime. Workers Sites is super easy to deploy - I just run wrangler publish after a Jekyll build. Workers Sites handles cache breaking and has Cloudflare's caching built-in. Most importantly, I don't have to worry about the site going down. We've seen big traffic spikes after being featured in the media. The site has never gotten slower and I don't have to worry. Cloudflare Workers Sites lets us concentrate on helping the people that need it instead of spending time managing hosting."

CovidTracking API

The COVID Tracking Project collects and publishes the most complete testing data available for US states and territories. The project emerged from a tweet of a Google Sheets spreadsheet, where someone was keeping tabs on the testing from each state.

“I had been making something similar but Jeff Hammerbacher had a more complete version. After Jeff combined forces with Alexis Madrigal I thought it best to use the data they had. Since we’ve used Google Sheets to power websites in the past I thought I should spin up a quick service that fetches the sheet data from Google and make it available as JSON for people to use.”  

Why Workers?

“Google often requires an API key or has some strange formatting. I just wanted an array of values that reflected the sheet rows. No long complicated URL. I picked Cloudflare Workers because it works really well as a serverless proxy.

At first the Worker was just a simple proxy, making an API request for every Worker request. Then I added cf: { cacheEverything: true, cacheTtl: 120 } to the fetch() options so Cloudflare could cache the fetch result. Caching the source is great but still requires having to decode, modify and serialize on every request. Some endpoints requested XML from AWS. Since it takes some time to parse really big XML strings we started seeing errors that the process was taking longer than 50ms CPU time. Cloudflare had to (generously) increase our limits to keep things running smoothly.

Not wanting consumers of our API to be kept waiting while the servers crunched the data on every request we started using Cloudflare Key Value storage for saving the parsed and serialized result. We put a TTL limit (like an hour) on every file saved to the KV store. On a new request we return the previous generated result from the cache first and then lookup the TTL of the item and if it’s more than 5 minutes old we make a new request and save it to the cache for next time. This way the user gets a fast result before we update an entry. If no user makes a request for an hour the cached item expires and the next request has to wait for a full process before response but that doesn’t happen for the popular endpoint/query options.”

Get Started

If you’re building a resource to help others cope with COVID-19 and are getting started with Workers, here are a few resources to help:

  • Workers Sites: allows you to deploy your static site directly to Cloudflare’s network, with a few simple commands. Get started with our tutorial, or video.
  • Tutorials: check out our tutorials to get started with Workers. We’ve highlighted a couple below that we think might be especially useful to you:
    • Localize a website: make your website accessible to an even greater audience by translating it to other languages.
    • Chat bot: with more people using chat for remote communication, chat bots can be a great way to make information more easily accessible at the public’s fingertips.
  • Template gallery: our template gallery is designed to help you build with Workers by providing building blocks such as code snippets and boilerplate. For example, if you are writing an API, we suggest getting started using our Apollo GraphQL server boilerplate.
  • HTMLRewriter API: the HTMLRewriter is a streaming HTML parser with an easy-to-use, selector-based JavaScript API for DOM manipulation, available in the Cloudflare Workers runtime. With so much disparate information on the web, many services that provide data about COVID-19 rely on scraping and aggregating data from multiple sources. See an example of the HTMLRewriter in action here to learn how you can use it to extract information from the web, and see the short sketch after this list for a taste of the API.
  • Want to help, but not sure what to build? Our Built with Workers gallery features projects utilizing Workers today to give you an idea of the possibilities of what you can build with Workers.
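
To give a flavour of the HTMLRewriter API mentioned above, here is a minimal, hypothetical sketch that collects the text of table cells from a fetched page (the URL and the selector are placeholders):

// Minimal HTMLRewriter sketch: extract the text of table cells from a page.
addEventListener('fetch', event => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  const cells = [];

  const rewriter = new HTMLRewriter().on('table td', {
    text(chunk) {
      if (chunk.text.trim()) cells.push(chunk.text.trim());
    },
  });

  const upstream = await fetch('https://example.com/statistics');

  // Run the response through the rewriter so the handlers above fire,
  // then discard the transformed body; we only want the collected text.
  await rewriter.transform(upstream).text();

  return new Response(JSON.stringify(cells), {
    headers: { 'content-type': 'application/json' },
  });
}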

08:47

Saturday Morning Breakfast Cereal - Sodomy [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If I get in one more sodomy joke this month I hit the Sodomy Hat Trick.


Today's News:

Friday, 17 April

09:00

Is BGP Safe Yet? No. But we are tracking it carefully [The Cloudflare Blog]

BGP leaks and hijacks have been accepted as an unavoidable part of the Internet for far too long. We have relied on protection at the upper layers, like TLS and DNSSEC, to ensure untampered delivery of packets, but a hijacked route often results in an unreachable IP address, which in turn results in an Internet outage.

The Internet is too vital to allow this known problem to continue any longer. It's time networks prevented leaks and hijacks from having any impact. It's time to make BGP safe. No more excuses.

Border Gateway Protocol (BGP), the protocol used to exchange routes, has existed and evolved since the 1980s. Over the years, security features have been added to it. The most notable addition is Resource Public Key Infrastructure (RPKI), a security framework for routing. It has been the subject of a few blog posts following our deployment in mid-2018.

Today, the industry considers RPKI mature enough for widespread use, with a sufficient ecosystem of software and tools, including tools we've written and open sourced. We have fully deployed Origin Validation on all our BGP sessions with our peers and signed our prefixes.

However, the Internet can only be safe if the major network operators deploy RPKI. Those networks have the ability to spread a leak or hijack far and wide and it's vital that they take a part in stamping out the scourge of BGP problems whether inadvertent or deliberate.

Many like AT&T and Telia pioneered global deployments of RPKI in 2019. They were successfully followed by Cogent and NTT in 2020. Hundreds networks of all sizes have done a tremendous job over the last few years but there is still work to be done.

If we observe the customer-cones of the networks that have deployed RPKI, we see around 50% of the Internet is more protected against route leaks. That's great, but it's nothing like enough.

Today, we are releasing isBGPSafeYet.com, a website to track deployments and filtering of invalid routes by the major networks.

We are hoping this will help the community and we will crowdsource the information on the website. The source code is available on GitHub, we welcome suggestions and contributions.

We expect this initiative will make RPKI more accessible to everyone and ultimately will reduce the impact of route leaks. Share the message with your Internet Service Providers (ISPs), hosting providers, and transit networks to help build a safer Internet.

Additionally, to monitor and test deployments, we decided to announce two bad prefixes from our 200+ data centers and via the 233+ Internet Exchange Points (IXPs) we are connected to:

  • 103.21.244.0/24
  • 2606:4700:7000::/48

Both these prefixes should be considered invalid and should not be routed by your provider if RPKI is implemented within their network. This makes it easy to demonstrate how far a bad route can go, and test whether RPKI is working in the real world.

A Route Origin Authorization for 103.21.244.0/24 on rpki.cloudflare.com

In the test you can run on isBGPSafeYet.com, your browser will attempt to fetch two pages: the first one, valid.rpki.cloudflare.com, is behind an RPKI-valid prefix, and the second one, invalid.rpki.cloudflare.com, is behind the RPKI-invalid prefix.

The test has two outcomes:

  • If both pages were correctly fetched, your ISP accepted the invalid route. It does not implement RPKI.
  • If only valid.rpki.cloudflare.com was fetched, your ISP implements RPKI. You will be less sensitive to route-leaks.

A simple test of RPKI invalid reachability
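
If you would like to reproduce the check by hand, the logic behind it is roughly the following (a rough sketch you can paste into a browser console, not the site's actual code):

// Rough sketch of the isBGPSafeYet.com check: try to fetch a page behind an
// RPKI-valid prefix and one behind the deliberately RPKI-invalid prefix.
async function reachable(url) {
  try {
    await fetch(url, { mode: 'no-cors', cache: 'no-store' });
    return true;
  } catch (err) {
    return false;
  }
}

(async () => {
  const validOk = await reachable('https://valid.rpki.cloudflare.com');
  const invalidOk = await reachable('https://invalid.rpki.cloudflare.com');

  if (validOk && invalidOk) {
    console.log('Both pages loaded: your ISP does not filter RPKI-invalid routes.');
  } else if (validOk && !invalidOk) {
    console.log('Only the valid page loaded: your ISP drops RPKI-invalid routes.');
  } else {
    console.log('Inconclusive: the RPKI-valid page could not be fetched.');
  }
})();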

We will be performing tests using those prefixes to check for propagation. Traceroutes and probing helped us in the past by creating visualizations of deployment.

A simple indicator is the number of networks sending the accepted route to their peers and collectors:

Routing status from online route collection tool RIPE Stat
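
You can also watch how widely the invalid prefix is being seen by route collectors yourself, for example via RIPEstat's public data API (a quick sketch; inspect the returned fields for the details):

// Ask RIPEstat's routing-status endpoint about the RPKI-invalid test prefix
// and log whatever visibility data the collectors currently report.
fetch('https://stat.ripe.net/data/routing-status/data.json?resource=103.21.244.0/24')
  .then(response => response.json())
  .then(result => console.log(result.data));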

In December 2019, we released a Hilbert curve map of the IPv4 address space. Every pixel represents a /20 prefix. If a dot is yellow, the prefix responded only to the probe from an RPKI-valid IP space. If it is blue, the prefix responded to probes from both RPKI-valid and RPKI-invalid IP space.

To summarize, the yellow areas are IP space behind networks that drop RPKI invalid prefixes. The Internet isn't safe until the blue becomes yellow.

Hilbert Curve Map of IP address space behind networks filtering RPKI invalid prefixes

Last but not least, we would like to thank every network that has already deployed RPKI and every developer who has contributed to validator software code bases. The last two years have shown that the Internet can become safer, and we are looking forward to the day when we can call route leaks and hijacks incidents of the past.

08:54

Saturday Morning Breakfast Cereal - Interpretation [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Of course, now and then the Universe forgets to check its calendar and so the cat just stays alive.


Today's News:

06:00

Time-Based One-Time Passwords for Phone Support [The Cloudflare Blog]

As part of Cloudflare’s support offering, we provide phone support to Enterprise customers who are experiencing critical business issues.

For account security, specific account settings and sensitive details are not discussed via phone. From today, we are providing Enterprise customers with the ability to configure phone authentication, allowing greater support to be offered over the phone without needing to perform validation through support tickets.

After providing your email address to a Cloudflare Support representative, you can now provide a token generated from the Cloudflare dashboard or via a 2FA app like Google Authenticator. This allows a customer to prove over the phone that they are who they say they are.

Configuring Phone Authentication

If you are an existing Enterprise customer interested in phone support, please contact your Customer Success Manager for eligibility information and set-up. If you are interested in our Enterprise offering, please get in contact via our Enterprise plan page.

If you already have phone support eligibility, you can generate single-use tokens from the Cloudflare dashboard or configure an authenticator app to do the same remotely.

On the support page, you will see a card called “Emergency Phone Support Hotline – Authentication”. From here you can generate a Single-Use Token for authenticating a single call or configure an Authenticator App to generate tokens from a 2FA app.

For more detailed instructions, please see the “Emergency Phone” section of the Contacting Cloudflare Support article on the Cloudflare Knowledge Base.

How it Works

A standardised approach for generating TOTPs (Time-Based One-Time Passwords) is described in RFC 6238 – this is the approach that is often used for setting up Two Factor Authentication on websites.

When configuring a TOTP authenticator app, you are usually asked to scan a QR code or input a long alphanumeric string. This is a randomly generated secret that is shared between your local authenticator app and the web service where you are configuring TOTP. After TOTP is configured, the secret is stored by both the web service and your local device.

TOTP password generation relies on two key inputs: the shared secret and the number of seconds since the Unix epoch (Unix time). The timestamp is integer-divided by a validity period (often 30 seconds) and this value is put into a cryptographic hash function alongside the secret to generate an output. The hexadecimal output is then truncated to provide the decimal digits which are shown to the user. The avalanche effect means that whenever the inputs to the hash function change slightly (e.g. the timestamp increments), a completely different hash output is generated.
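
To make the mechanics concrete, here is a minimal Node.js sketch of the RFC 6238 calculation described above (the secret is a placeholder; Cloudflare's actual service differs in its details):

// Minimal RFC 6238 TOTP sketch using Node's crypto (HMAC-SHA1, 6 digits).
const crypto = require('crypto');

function totp(secret, timeStepSeconds = 30, digits = 6, now = Date.now()) {
  // Integer-divide Unix time by the validity period to get the counter.
  const counter = Math.floor(now / 1000 / timeStepSeconds);

  // RFC 4226/6238: the counter is an 8-byte big-endian value.
  const buf = Buffer.alloc(8);
  buf.writeBigUInt64BE(BigInt(counter));

  // HMAC the counter with the shared secret.
  const hmac = crypto.createHmac('sha1', secret).update(buf).digest();

  // Dynamic truncation: 4 bytes at an offset taken from the last nibble,
  // masked to 31 bits, then reduced to the requested number of digits.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;

  return String(code).padStart(digits, '0');
}

console.log(totp('not-a-real-secret'));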

This approach is fairly widely used and is available in a number of libraries, depending on your preferred programming language. However, as our phone validation functionality offers both authenticator app support and generation of a single-use token from the dashboard (where no shared secret exists), some deviation was required.

We generate a per-user secret by hashing an internal user ID together with a Cloudflare-internal secret, and that derived secret is in turn used to generate RFC 6238-compliant time-based one-time passwords. This way, the service can generate passwords for any user without needing to store additional secrets. The current token is then surfaced to the user every 30 seconds via a JavaScript request, without exposing the secret used to generate it.

One question you may be asking yourself after all of this is: why don’t we simply use the 2FA mechanism that users log in with for phone validation too? Firstly, we don’t want to accustom users to providing their 2FA tokens to anyone else (they should purely be used for logging in). Secondly, as you may have noticed, we recently began supporting WebAuthn keys for logging in; as these are physical tokens used for website authentication, they aren’t suited to usage on a mobile device.

To improve user experience during a phone call, we also validate tokens in the previous time step in the event it has expired by the time the user has read it out (indeed, RFC 6238 provides that “at most one time step is allowed as the network delay”). This means a token can be valid for up to one minute.
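
Continuing the sketch above, accepting the previous time step is just one extra comparison (again purely illustrative, not Cloudflare's code):

// Accept a candidate token for the current time step or the one before it,
// matching RFC 6238's allowance of one time step of network delay.
function verifyTotp(secret, candidate, timeStepSeconds = 30) {
  const now = Date.now();
  return [0, 1].some(stepsBack =>
    totp(secret, timeStepSeconds, 6, now - stepsBack * timeStepSeconds * 1000) === candidate
  );
}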

The APIs powering this service are then wrapped with API gateways that offer audit logging both for customer actions and actions completed by staff members. This provides a clear audit trail for customer authentication.

Future Work

Authentication is a critical component of securing customer support interactions. Authentication tooling must develop alongside support contact channels: from web forms behind logins, to JWT tokens for validating live chat sessions, and now TOTP phone authentication. This is complemented by technical support engineers, who manage risk by routing certain issues into traditional support tickets and referring some cases to named customer success managers for approval.

We are constantly advancing our support experience; for example, we plan to further improve our Enterprise Phone Support by giving users the ability to request a callback from a support agent within our dashboard. As always, right here on our blog we’ll keep you up-to-date with improvements in our service.

Thursday, 16 April

09:16

Saturday Morning Breakfast Cereal - A Change [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I'm pretty sure this has been done to me. I haven't worn socks and sandals in weeks now.


Today's News:

05:00

Cloudflare Workers Now Support COBOL [The Cloudflare Blog]

Recently, COBOL has been in the news as the State of New Jersey has asked for help with a COBOL-based system for unemployment claims. The system has come under heavy load because of the societal effects of the SARS-CoV-2 virus. This appears to have prompted IBM to offer free online COBOL training.

As old as COBOL is (60 years old this month), it is still heavily used in information management systems and pretty much anywhere there’s an IBM mainframe around. Three years ago Thomson Reuters reported that COBOL is used in 43% of banking systems, is behind 80% of in-person financial transactions, and is involved in 95% of ATM card transactions. They also reported hundreds of billions of lines of running COBOL.

COBOL is often a source of amusement for programmers because it is seen as old, verbose, clunky, and difficult to maintain. And it’s often the case that people making the jokes have never actually written any COBOL. We plan to give them a chance: COBOL can now be used to write code for Cloudflare’s serverless platform Workers.

Here’s a simple “Hello, World!” program written in COBOL and accessible at https://hello-world.cobol.workers.dev/. It doesn’t do much--it just outputs “Hello, World!”--but it does it using COBOL.

        IDENTIFICATION DIVISION.
        PROGRAM-ID. HELLO-WORLD.
        DATA DIVISION.
        WORKING-STORAGE SECTION.
        01 HTTP_OK   PIC X(4)  VALUE "200".
        01 OUTPUT_TEXT PIC X(14) VALUE "Hello, World!".
        PROCEDURE DIVISION.
            CALL "set_http_status" USING HTTP_OK.
            CALL "append_http_body" USING OUTPUT_TEXT.
        STOP RUN.

If you’ve never seen a COBOL program before, it might look very odd. The language emerged in 1960 from the work of a committee designing a language for business (COBOL = COmmon Business Oriented Language) and was intended to be easy to read and understand (hence the verbose syntax). It was partly based on an early language called FLOW-MATIC created by Grace Hopper.

IDENTIFICATION DIVISION.

To put COBOL in context: FORTRAN arrived in 1957, LISP and ALGOL in 1958, APL in 1962 and BASIC in 1964. The C language didn’t arrive on scene until 1972. The late 1950s and early 1960s saw a huge amount of work on programming languages, some coming from industry (such as FORTRAN and COBOL) and others from academia (such as LISP and ALGOL).

COBOL is a compiled language and can easily be compiled to WebAssembly and run on Cloudflare Workers. If you want to get started with COBOL, the GNUCobol project is a good place to begin.

Here’s a program that waits for you to press ENTER and then adds up the numbers 1 to 1000 and outputs the result:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD.
       ENVIRONMENT DIVISION.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       77 IDX  PICTURE 9999.
       77 SUMX PICTURE 999999.
       77 X    PICTURE X.
       PROCEDURE DIVISION.
       BEGIN.
           ACCEPT X.
           MOVE ZERO TO IDX.
           MOVE ZERO TO SUMX.
           PERFORM ADD-PAR UNTIL IDX = 1001.
           DISPLAY SUMX.
           STOP RUN.
       ADD-PAR.
           COMPUTE SUMX = SUMX + IDX.
           ADD 1 TO IDX.

You can compile it and run it using GnuCOBOL like this (I put the code in a file called terminator.cob):

$ cobc -x terminator.cob
$ ./terminator
500500
$

cobc compiles the COBOL program to an executable file. It can also output a C file containing C code to implement the COBOL program:

$ cobc -C -o terminator.c -x terminator.cob

This .c file can then be compiled to WebAssembly. I’ve done that and placed this program (with small modifications to make it output via HTTP, as in the Hello, World! program above) at https://terminator.cobol.workers.dev/. Note that the online version doesn’t wait for you to press ENTER, it just does the calculation and gives you the answer.

DATA DIVISION.

You might be wondering why I called this terminator.cob. That’s because this is part of the code that appears in The Terminator, James Cameron’s 1984 film. The film features a ton of code from the Apple ][ and a little snippet of COBOL (see the screenshot from the film below).

A screenshot from The Terminator showing the COBOL snippet

The screenshot shows the view from one of the HK-Aerial hunter-killer VTOL craft used by Skynet to try to wipe out the remnants of humanity. Using COBOL.

You can learn all about that in this YouTube video I produced:

For those of you of the nerdy persuasion, here’s the original code as it appeared in the May 1984 edition of “73 Magazine” and was copied to look cool on screen in The Terminator.

The original listing as printed in the May 1984 edition of “73 Magazine”

If you want to scale your own COBOL-implemented Skynet, it only takes a few steps to convert COBOL to WebAssembly and have it run in over 200 cities worldwide on Cloudflare’s network.

PROCEDURE DIVISION.

Here’s how you can take your COBOL program and turn it into a Worker.

There are multiple compiler implementations of the COBOL language and a few of them are proprietary. We decided to use GnuCOBOL (formerly OpenCOBOL) because it's free software.

Given that Cloudflare Workers supports WebAssembly, it sounded quite straightforward: GnuCOBOL can compile COBOL to C, and Emscripten compiles C/C++ to WebAssembly. However, we needed to make sure that our WebAssembly binary was as small and fast as possible, to maximize the time available for user code instead of COBOL's runtime.

GnuCOBOL has a runtime library called libcob, which implements COBOL's runtime semantics, using GMP (GNU Multiple Precision Arithmetic Library) for arithmetic. After we compiled both these libraries to WebAssembly and linked against our compiled COBOL program, we threw the WebAssembly binary in a Cloudflare Worker.

It was too big and it hit the CPU limit (you can find Cloudflare Worker’s limits here), so it was time to optimize.

GMP turns out to be a big library, but luckily for us someone made an optimized version for JavaScript (https://github.com/kripken/gmp.js), which was much smaller and reduced the WebAssembly instantiation time. As a side note, it's often the case that functions implemented in C could be removed in favour of a JavaScript implementation already existing on the web platform. But for this project we didn’t want to rewrite GMP.

While Emscripten can emulate a file system with all its syscalls, it didn't seem necessary in a Cloudflare Worker. We patched GnuCOBOL to remove the support for local user configuration and other small things, allowing us to remove the emulated file system.

The size of our Wasm binary is relatively small compared to other languages. For example, around 230KB with optimization enabled for the Game of Life later in this blog post.

Now that we have a COBOL program running in a Cloudflare Worker, we still need a way to generate an HTTP response.

The HTTP response generation and manipulation is written in JavaScript (for now... some changes to WebAssembly are currently being discussed that would allow a better integration). Emscripten imports these functions and makes them available in C, and finally we link all the C code with our COBOL program. COBOL already has good interoperability with C code.

As an example, we implemented the rock-paper-scissors game (https://github.com/cloudflare/cobol-worker/blob/master/src/worker.cob). See the full source (https://github.com/cloudflare/cobol-worker).

Our work can be used by anyone wanting to compile COBOL to WebAssembly; the toolchain we used is available on GitHub (https://github.com/cloudflare/cobaul) and is free to use.

To deploy your own COBOL Worker, you can run the following commands. Make sure that you have wrangler installed on your machine (https://github.com/cloudflare/wrangler).

wrangler generate cobol-worker https://github.com/cloudflare/cobol-worker-template

It will generate a cobol-worker directory containing the Worker. Follow the instructions in your terminal to configure your Cloudflare account with wrangler.

Your worker is ready to go; enter npm run deploy and once deployed the URL will be displayed in the console.

STOP RUN.

I am very grateful to Sven Sauleau for doing the work to make it easy to port a COBOL program into a Workers file and for writing the PROCEDURE DIVISION section above and to Dane Knecht for suggesting Conway’s Game of Life.

Cloudflare Workers with WebAssembly is an easy-to-use serverless platform that’s fast and cheap and scalable. It supports a wide variety of languages--including COBOL (and C, C++, Rust, Go, JavaScript, etc.). Give it a try today.

AFTERWORD

We learnt the other day of the death of John Conway, who is well known for Conway’s Game of Life. In tribute to Conway, XKCD dedicated a cartoon:

XKCD’s tribute cartoon to John Conway

I decided to implement the Game of Life in COBOL and reproduce the cartoon.

Here’s the code:

IDENTIFICATION DIVISION.
       PROGRAM-ID. worker.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 PARAM-NAME PIC X(7).
       01 PARAM-VALUE PIC 9(10).
       01 PARAM-OUTPUT PIC X(10).
       01 PARAM PIC 9(10) BINARY.
       01 PARAM-COUNTER PIC 9(2) VALUE 0.
       01 DREW PIC 9 VALUE 0.
       01 TOTAL-ROWS PIC 9(2) VALUE 20.
       01 TOTAL-COLUMNS PIC 9(2) VALUE 15.
       01 ROW-COUNTER PIC 9(2) VALUE 0.
       01 COLUMN-COUNTER PIC 9(2) VALUE 0.
       01 OLD-WORLD PIC X(300).
       01 NEW-WORLD PIC X(300).
       01 CELL PIC X(1) VALUE "0".
       01 X PIC 9(2) VALUE 0.
       01 Y PIC 9(2) VALUE 0.
       01 POS PIC 9(3).
       01 ROW-OFFSET PIC S9.
       01 COLUMN-OFFSET PIC S9.
       01 NEIGHBORS PIC 9 VALUE 0.
       PROCEDURE DIVISION.
           CALL "get_http_form" USING "state" RETURNING PARAM.
	   IF PARAM = 1 THEN
	      PERFORM VARYING PARAM-COUNTER FROM 1 BY 1 UNTIL PARAM-COUNTER > 30
	         STRING "state" PARAM-COUNTER INTO PARAM-NAME
	         CALL "get_http_form" USING PARAM-NAME RETURNING PARAM-VALUE
		 COMPUTE POS = (PARAM-COUNTER - 1) * 10 + 1
		 MOVE PARAM-VALUE TO NEW-WORLD(POS:10)
	      END-PERFORM
 	  ELSE
	    MOVE "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001110000000000001010000000000001010000000000000100000000000101110000000000010101000000000000100100000000001010000000000001010000000000000000000000000000000000000000000000000000000000000000000" TO NEW-WORLD.
           PERFORM PRINT-WORLD.
           MOVE NEW-WORLD TO OLD-WORLD.
           PERFORM VARYING ROW-COUNTER FROM 1 BY 1 UNTIL ROW-COUNTER > TOTAL-ROWS
               PERFORM ITERATE-CELL VARYING COLUMN-COUNTER FROM 1 BY 1 UNTIL COLUMN-COUNTER > TOTAL-COLUMNS
	   END-PERFORM.
	   PERFORM PRINT-FORM.
           STOP RUN.
       ITERATE-CELL.
           PERFORM COUNT-NEIGHBORS.
	   COMPUTE POS = (ROW-COUNTER - 1) * TOTAL-COLUMNS + COLUMN-COUNTER.
           MOVE OLD-WORLD(POS:1) TO CELL.
           IF CELL = "1" AND NEIGHBORS < 2 THEN
               MOVE "0" TO NEW-WORLD(POS:1).
           IF CELL = "1" AND (NEIGHBORS = 2 OR NEIGHBORS = 3) THEN
               MOVE "1" TO NEW-WORLD(POS:1).
           IF CELL = "1" AND NEIGHBORS > 3 THEN
               MOVE "0" TO NEW-WORLD(POS:1).
           IF CELL = "0" AND NEIGHBORS = 3 THEN
               MOVE "1" TO NEW-WORLD(POS:1).
       COUNT-NEIGHBORS.
           MOVE 0 TO NEIGHBORS.
	   PERFORM COUNT-NEIGHBOR
	       VARYING ROW-OFFSET FROM -1 BY 1 UNTIL ROW-OFFSET > 1
	          AFTER COLUMN-OFFSET FROM -1 BY 1 UNTIL COLUMN-OFFSET > 1.
       COUNT-NEIGHBOR.
           IF ROW-OFFSET <> 0 OR COLUMN-OFFSET <> 0 THEN
               COMPUTE Y = ROW-COUNTER + ROW-OFFSET
               COMPUTE X = COLUMN-COUNTER + COLUMN-OFFSET
               IF X >= 1 AND X <= TOTAL-COLUMNS AND Y >= 1 AND Y <= TOTAL-ROWS THEN
	       	   COMPUTE POS = (Y - 1) * TOTAL-COLUMNS + X
                   MOVE OLD-WORLD(POS:1) TO CELL
		   IF CELL = "1" THEN
		      COMPUTE NEIGHBORS = NEIGHBORS + 1.
       PRINT-FORM.
           CALL "append_http_body" USING "<form name=frm1 method=POST><input type=hidden name=state value=".
	   CALL "append_http_body" USING DREW.
	   CALL "append_http_body" USING ">".
	   PERFORM VARYING PARAM-COUNTER FROM 1 BY 1 UNTIL PARAM-COUNTER > 30
    	       CALL "append_http_body" USING "<input type=hidden name=state"
	       CALL "append_http_body" USING PARAM-COUNTER
    	       CALL "append_http_body" USING " value="
	       COMPUTE POS = (PARAM-COUNTER - 1) * 10 + 1
	       MOVE NEW-WORLD(POS:10) TO PARAM-OUTPUT
	       CALL "append_http_body" USING PARAM-OUTPUT
    	       CALL "append_http_body" USING ">"
	   END-PERFORM
           CALL "append_http_body" USING "</form>".
       PRINT-WORLD.
           MOVE 0 TO DREW.
           CALL "set_http_status" USING "200".
	   CALL "append_http_body" USING "<html><body onload='setTimeout(function() { document.frm1.submit() }, 1000)'>"
	   CALL "append_http_body" USING "<style>table { background:-color: white; } td { width: 10px; height: 10px}</style>".
           CALL "append_http_body" USING "<table>".
           PERFORM PRINT-ROW VARYING ROW-COUNTER FROM 3 BY 1 UNTIL ROW-COUNTER >= TOTAL-ROWS - 1.
           CALL "append_http_body" USING "</table></body></html>".
       PRINT-ROW.
           CALL "append_http_body" USING "<tr>".
           PERFORM PRINT-CELL VARYING COLUMN-COUNTER FROM 3 BY 1 UNTIL COLUMN-COUNTER >= TOTAL-COLUMNS - 1.
           CALL "append_http_body" USING "</tr>".
       PRINT-CELL.
	   COMPUTE POS = (ROW-COUNTER - 1) * TOTAL-COLUMNS + COLUMN-COUNTER.
	   MOVE NEW-WORLD(POS:1) TO CELL.
           IF CELL = "1" THEN
	       MOVE 1 TO DREW
               CALL "append_http_body" USING "<td bgcolor=blue></td>".
           IF CELL = "0" THEN
               CALL "append_http_body" USING "<td></td>".

If you want to run your own simulation you can do an HTTP POST with 30 parameters that when concatenated form the layout of the 15x20 world simulated in COBOL.
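
For example, something along these lines would seed a custom starting pattern, assuming you have deployed the Worker yourself (the URL is a placeholder, and the state/stateNN field names are inferred from the COBOL listing above, so treat them as an educated guess):

// Seed the COBOL Game of Life Worker with a custom 15x20 starting world.
// The URL is a placeholder for your own deployment, and the field names
// (state, state01..state30) are inferred from the COBOL code above.
const world = Array(300).fill('0');

// Light a small glider near the top-left corner (row-major, 15 columns per row).
[[2, 3], [3, 4], [4, 2], [4, 3], [4, 4]]
  .forEach(([row, col]) => { world[(row - 1) * 15 + (col - 1)] = '1'; });

const form = new URLSearchParams({ state: '1' });
for (let i = 1; i <= 30; i++) {
  const chunk = world.slice((i - 1) * 10, i * 10).join('');
  form.set('state' + String(i).padStart(2, '0'), chunk);
}

fetch('https://my-first-cobol.example.workers.dev/', { method: 'POST', body: form })
  .then(response => response.text())
  .then(html => console.log(html));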

If you want to install this yourself, take the following steps:

  1. Sign up for Cloudflare
  2. Sign up for a workers.dev subdomain. I've already grabbed cobol.workers.dev, but imagine you’ve managed to grab my-cool-name.workers.dev
  3. Install wrangler, Cloudflare’s CLI for deploying Workers
  4. Create a new COBOL Worker using the template
    wrangler generate cobol-worker https://github.com/cloudflare/cobol-worker-template
    
  5. Configure wrangler.toml to point to your account and set a name for this project, let’s say my-first-cobol.
  6. Grab the files src/index.js and src/worker.cob from my repo here: https://github.com/jgrahamc/game-of-life and replace them in the cobol-worker.
  7. npm run deploy
  8. The COBOL Worker will be running at https://my-first-cobol.my-cool-name.workers.dev/

Wednesday, 15 April

18:28

Cloudflare Dashboard and API Outage on April 15, 2020 [The Cloudflare Blog]

Starting at 1531 UTC and lasting until 1952 UTC, the Cloudflare Dashboard and API were unavailable because of the disconnection of multiple, redundant fibre connections from one of our two core data centers.

This outage was not caused by a DDoS attack, or related to traffic increases caused by the COVID-19 crisis. Nor was it caused by any malfunction of software or hardware, or any misconfiguration.

What happened

As part of planned maintenance at one of our core data centers, we instructed technicians to remove all the equipment in one of our cabinets. That cabinet contained old inactive equipment we were going to retire and had no active traffic or data on any of the servers in the cabinet. The cabinet also contained a patch panel (switchboard of cables) providing all external connectivity to other Cloudflare data centers. Over the space of three minutes, the technician decommissioning our unused hardware also disconnected the cables in this patch panel.

This data center houses Cloudflare’s main control plane and database, so when we lost connectivity, the Dashboard and API became unavailable immediately. The Cloudflare network itself continued to operate normally, and proxied customer websites and applications continued to work, as did Magic Transit, Cloudflare Access, and Cloudflare Spectrum. All security services, such as our Web Application Firewall, continued to work normally.

But the following were not possible:

  • Logging into the Dashboard
  • Using the API
  • Making any configuration changes (such as changing a DNS record)
  • Purging cache
  • Running automated Load Balancing health checks
  • Creating or maintaining Argo Tunnel connections
  • Creating or updating Cloudflare Workers
  • Transferring domains to Cloudflare Registrar
  • Accessing Cloudflare Logs and Analytics
  • Encoding videos on Cloudflare Stream
  • Logging information from edge services (customers will see a gap in log data)

No configuration data was lost as a result of the outage. Our customers’ configuration data is both backed up and replicated off-site, but neither backups nor replicas were needed. All configuration data remained in place.

How we responded

During the outage period, we worked simultaneously to cut over to our disaster recovery core data center and restore connectivity.

Dozens of engineers worked in two virtual war rooms, as Cloudflare is mostly working remotely because of the COVID-19 emergency. One room was dedicated to restoring connectivity, the other to the disaster recovery failover.

We quickly failed over our internal monitoring systems so that we had visibility of the entire Cloudflare network. This gave us global control and the ability to see issues in any of our network locations in more than 200 cities worldwide. This cutover meant that Cloudflare’s edge service could continue running normally and the SRE team could deal with any problems that arose in the day to day operation of the service.

As we were working the incident, we made a decision every 20 minutes on whether to fail over the Dashboard and API to disaster recovery or to continue trying to restore connectivity. If there had been physical damage to the data center (e.g. if this had been a natural disaster) the decision to cut over would have been easy, but because we had run tests on the failover we knew that the failback from disaster recovery would be very complex and so we were weighing the best course of action as the incident unfolded.

  • At 1944 UTC the first link from the data center to the Internet came back up. This was a backup link with 10Gbps of connectivity.
  • At 1951 UTC we restored the first of four large links to the Internet.
  • At 1952 UTC the Cloudflare Dashboard and API became available.
  • At 2016 UTC the second of four links was restored.
  • At 2019 UTC the third of four links was restored.
  • At 2031 UTC fully-redundant connectivity was restored.

Moving forward

We take this incident very seriously, and recognize the magnitude of impact it had. We have identified several steps we can take to address the risk of these sorts of problems from recurring in the future, and we plan to start working on these matters immediately:

  • Design: While the external connectivity used diverse providers and led to diverse data centers, we had all the connections going through only one patch panel, creating a single physical point of failure. This should be spread out across multiple parts of our facility.
  • Documentation: After the cables were removed from the patch panel, we lost valuable time identifying, for the data center technicians, the critical cables providing external connectivity that needed to be restored. We should take steps to ensure the various cables and panels are labeled for quick identification by anyone working to remediate the problem. This should expedite our ability to carry out the needed repairs.
  • Process: While sending our technicians instructions to retire hardware, we should call out clearly the cabling that should not be touched.

We will be running a full internal post-mortem to ensure that the root causes of this incident are found and addressed.

We are very sorry for the disruption.

07:03

Offer of Assistance to Governments During COVID-19 [The Cloudflare Blog]

As the COVID-19 emergency continues to affect countries and territories around the world, the Internet has been a key factor in providing information to the public. As businesses, organizations and government agencies adjust to this new normal, we recognize the strain that this pandemic has put on the groups working to assist in virus mitigation and provide accurate information to the general public on the state of the pandemic.

At Cloudflare, this means ensuring that these entities have the necessary tools and resources available to them in these extenuating circumstances. On March 13, we announced our Cloudflare for Teams products will be free until September 1, 2020, to ensure Cloudflare users and prospective users have the tools they need to support secure and efficient remote work. Additionally, we have removed usage caps for existing Cloudflare for Teams users and are also providing onboarding sessions so these groups can continue business in this new normal.

As a company, we believe we can do more and have been thinking about ways we can support organizations and businesses that are at the forefront of the pandemic such as health officials and those providing relief to the public. Many organizations have reached out to us with COVID-19 related initiatives including the creation of symptom tracking websites, medical resource donations, and websites focused on providing updates on COVID-19 cases in specific regions.

During this time, we have seen an increase in applications for Project Galileo, an initiative we started in 2014 to provide free services to organizations on the Internet including humanitarian organizations, media sites and voices of political dissent. Project Galileo was started to ensure these groups stay online, as they are repeatedly targeted due to the work they do. Since March 16, we have seen a 40% increase in applications to the project from organizations related to COVID-19 relief efforts and information. We are happy to assist other organizations that have started initiatives such as these with ensuring the accessibility and resilience of their web infrastructure and internal teams.

Risks to Government Agencies’ Web Infrastructure During the COVID-19 Pandemic

As COVID-19 has disrupted our lives, the Internet has allowed many aspects of our life to adapt and carry on. From health care, to academia, to sales, a working Internet infrastructure is essential for business continuity and the dissemination of information. At Cloudflare, we’ve witnessed the effects of this transition to online interaction. In the last two months, we have seen both a massive increase in Internet traffic and a shift in the type of content users access online. Government agencies have seen a 100% increase in traffic to their websites during the pandemic.

This unexpected shift in traffic patterns can come with a cost. Essential websites that provide crucial information and updates on this pandemic may not have configured their systems to handle the massive surges in traffic they are currently seeing. Government agencies providing essential health information to citizens on the COVID-19 pandemic have temporarily gone offline due to increased traffic. We’ve also seen examples of public service announcement sites and local government sites that provide unemployment resources being unable to serve their traffic. In New Jersey, New York and Ohio, websites that provide unemployment benefits and health insurance options for people who have recently been laid off have crashed due to large amounts of traffic and unprecedented demand.

To help process claims for unemployment benefits, New Jersey’s Department of Labor & Workforce Development has created a schedule for applicants.

During the spread of COVID-19, government agencies have also experienced cyberattacks.

The Australian government’s digital platform for providing welfare services for Australian citizens, known as Mygov, was slow and inaccessible for a short period of time. Although a DDoS attack was suspected, the problems were actually the result of 95,000 legitimate requests to access unemployment benefits, as the country recently doubled these benefits to help those impacted by the pandemic.

COVID-19 Government Package

Cloudflare has helped improve the security and performance of many vulnerable entities on the Internet with Project Galileo and ensured the security of government related election agencies with the Athenian Project. Our services are designed not only to prevent malicious actors from disrupting a website, but also to protect large influxes of legitimate traffic. In light of recent events, we want to help state and local government agencies stay online and provide essential information to the public without worrying their site can be taken down by malicious or unexpected spikes in traffic.

Therefore, we are excited to provide a free package of services to state and local governments worldwide until September 1, 2020, to ensure they have the tools needed to secure their web infrastructure and internal teams.

This package of free services includes the following features:

  • Cloudflare Business Level services: Includes unmetered mitigation of DDoS attacks, web application firewall (WAF) with up to 25 custom rulesets, and ability to upload custom SSL certificates.
  • Rate limiting: Rate Limiting allows users to rate limit, shape or block traffic based on the rate of requests per client IP address, cookie, authentication token, or other attributes of the request.
  • Cloudflare for Teams: A suite of tools to help those working from home maintain business continuity.
    • Access: To ensure the security of internal teams, Cloudflare Access allows organizations to secure, authenticate, and monitor user access to any domain, application, or path on Cloudflare, without using a VPN.
    • Gateway: Uses DNS filtering to help protect users from phishing scams or malware sites across multiple locations.

To apply for our COVID-19 government assistance initiative, please visit our website at https://www.cloudflare.com/governmentagency/.

We are also making this offer available for Cloudflare channel partners around the world to help support government agencies in their respective countries during this challenging time for the global community.  If you are a partner and would like information on how to provide Cloudflare for Teams, a Business Plan and Rate Limiting at no charge, please contact your Cloudflare Partner Representative or email partners@cloudflare.com.

What’s Next

The news of COVID-19 has transformed every part of our lives. During this difficult time, the Internet has allowed us to stay connected with friends and family and to provide resources to those in need. At Cloudflare, we are committed to helping businesses, organizations and government agencies stay online to ensure that everyone has access to authoritative information.

02:00

Fedora Origins – Part 01 [Fedora Magazine]

Editor’s comment: The format of this article is different from the usual article that Fedora Magazine has published: a Fedora origins story told from the point of view of a Fedora user. The author has chosen to tell a story, since to simply present the bare facts is akin to just reading the wiki page about it.

Hello World!

Hello, I am… no, I’m not going to give my real name. Let’s say I’m female, probably shorter and older than you. I used to go by the nick of Isadora, more on that later.

Here you have one of the old RH boxes

Now some context. Back in the late ’90s, the internet became popular and PCs started to be a thing. However, most people had neither, because they were very expensive and often you could do better with the traditional methods. Yes, computers were very basic back then. I used to play with those pocket games that were fascinating at the time, but totally lame now. Monochrome screens with pixelated flat animations. Not going to dive there, just giving an idea of how it was.

In the mid-90s a company named Red Hat emerged and slowly started to make a profit of its own by selling its own business-oriented distribution and software utilities. The name comes from one of its founders, Marc Ewing, who used to wear a red lacrosse cap in university so other students could spot him easily and ask him questions.

Of course, as it was a business-oriented distribution, and I was busy with multiple other things, I didn’t pay much attention to it. It lacked the software I needed and since I wasn’t a customer, I was in no position to ask for additions. However, it was Linux and as such Open Source. People started to package stuff for RHL and put it in repositories. I was invited to join the community project, Fedora.us. I promptly declined, misunderstanding the name. The second time I got invited, I asked ‘what is with the “US” there (in the name)?’ Another user explained it was ‘us’ as in ‘we’, not as in the ‘United States.’ They explained a bit about how the community worked and I decided to give it a go.

Then my studies got in the way, and I had to shelve it.

Login Screen in Fedora Core

Press Return

By the time I came back to Fedora.us it had changed its name to Fedora Project and was actively being worked on from within Red Hat. Now, I wasn’t there so my direct knowledge of how this happened is a bit foggy. Some say that Fedora existed separately and Red Hat added/invited them, some say that Fedora was completely RH’s idea, some say they existed independently and at some point met or joined. Choose the version you like, I’ll put some links down there so you can know more details and decide for yourself. As far as I’m concerned, they worked together.

Well, as usual someone dropped some CDs with ISOs for me. If I had a euro for every ISO I’ve been offered, or had tossed at my desk, for me to try, I would be rich. As a matter of fact, I’m not rich, but I do have a big rack full of old distros.

Anyways

Now it’s the early 2000s and things have changed dramatically. Computers’ prices have dropped and internet speed is increasing, plus a set of new technologies make it cheaper and more reliable. Computers now can do so much more than just a decade ago, and they’re smaller too. Screens are bigger, with better colors and resolution. Laptops are starting to become popular though still expensive and less powerful than desktop PCs.

During this time, I tried both Fedora and Red Hat. Now, as has been said before, Red Hat focuses on businesses and companies. Their main concern is having exactly the software their customers need, with the features their customers need, delivered with rock-solid stability and a reliable update & support cycle. A lot of customization, a variety of options and many cool new features are not their core concern. More software means more testing and development work and a bigger chance of things failing. Yet the technology industry is constantly changing and innovating. Sticking too much to older versions or proven formulas can be fatal for a company.

So what to do? Well, they solved it with Fedora. Fedora Project would be the innovative, looking ahead test bed, and Red Hat Enterprise Linux was the more conservative, rock solid operating system for businesses. Yes, they changed the name from Red Hat Linux to Red Hat Enterprise Linux. Sounds better, doesn’t it?

Unsurprisingly, Fedora had a reputation for being difficult, unstable and for “hackers only”. Whenever I said I was using Fedora, people would give me odd looks or say something like “I want something stable” or “I’m not into that” (meaning they didn’t fancy programming/hacking activities). Countless individuals suggested I might want to use one of the other, beginner-friendly distributions, without themselves even giving Fedora a try! Many would disregard Linux as a whole as an amateur thing, only valid for playing but not good for serious work and companies. To each their own, I suppose.

Note the F and the bubble already there

Yes, but why?

Those early versions were called Fedora Core and had a very uncertain release pattern. The six months cycle came much later. Fedora Core got its name because there were two repositories, Core and Extras. Core had the essentials, so to speak, and was maintained by Red Hat. Extras was, well, everything else. Any software that most users would want or need was included there, and it was maintained by a wide range of contributors.

From the beginning, one of the most powerful reasons for me to use it was the community and its core values. The Four Foundations of Fedora (Freedom, Features, First & Friends) were lived and breathed, and not just a catchy line on a website or a leaflet. The Fedora Project strove (and still does) to deliver the newest features first, caring for freedom (of choice and software) and keeping a good, open community, making friends as we contribute to the project.

I also liked the fact that Fedora, as its purpose was testing for Red Hat, delivered a lot of new software and technologies; it was like opening the window to see the future today.

The downside was its unreliable upgrade cycle. You could get a new version in a few months or next year… nobody knew, there was no agreed schedule.

Note how, despite being Fedora, RH’s logo and signature is omnipresent

What was in the box

Fedora Core kept this name up to the sixth version. From the start, it was meant to be a distribution you could use right after installing it, so it came with Gnome 2, KDE 3, OpenOffice and some browser I forgot, possibly Firefox.

I remember it being the first to introduce SELinux and SystemD by default, and to replace LILO with GRUB. I also remember the hardware requirements were something at the time, although they now sound laughable: Pentium II 400MHz, 256MB RAM (yes, you read it right) and 2GB of space in disk. It even had an option for terminal only! This would require only 64MB RAM and Pentium II 200MHz. Amazing, isn’t it?

It had codenames. Not publicly, but it had them, and they were quite peculiar. Fedora Core 1 was code-named «Yarrow», which is a medium-size plant with yellow or white crown-like flowers. Core 2 was Tettnang, which is a small town in Baden-Württemberg, Germany. Not sure about Core 3, I think it was Heidelberg, but maybe I’m mixing it up with later releases. Core 4 was Stentz, if I recall correctly (no idea what it means), Core 5 was a colour, I think Bordeaux, and Core 6 was Zod, which I think was a comic character, but I could be wrong. If there was a method in their madness I have no idea. I thought the names amusing but didn’t give a second thought to them as they didn’t affect anything, not even the design of each release.

Ah… good ol’ genetic helix

So what now?

Well, of course, Fedora Project has evolved from where we have stopped. But that’s for later articles or this one will be too long. For now, I leave you with an extract of an interview with Matthew Miller, current Project Leader and some links in case you want to know more.

Extracts from an interview with Matthew Miller, Project Leader.

Matthew Miller tells about the beginnings in Eduard Lucena’s podcast (transcription here): “Fedora started about 15 years ago, really. It actually started as a thing called Fedora.us. Back in those days, there was Red Hat Linux.” “Meanwhile, there was this thing called Fedora.us which was basically a project to make additional software available to users of Red Hat Linux. Find things that weren’t part of Red Hat Linux, and package them up, and make them available to everybody. That was started as a community project.”

“Red Hat (then) merged with this Fedora.us project to form Fedora Project that produces an upstream operating system that Red Hat Enterprise Linux is derived from but then moves on a slower pace.”

“We were then two parts, Fedora Core, which was basically inherited from the old Red Hat Linux and only Red Hat employees could do anything with and then Fedora Extras, where community could come together to add things on top of that Fedora Core. It took a little while to get off the ground but it was fairly successful”

“Around the time of Fedora Core 6, those were actually merged together into one big Fedora where all of the packages were all part of the same thing. There was no more distinction of Core and Extras, and everything was all together and, more importantly, all the community was all together.

They invited the community to take ownership of the whole thing and for Red Hat to become part of the community rather than separate. That was a huge success.”

Links of interest

Fedora, a visual history
https://www.phoronix.com/scan.php?page=article&item=678&num=1

Red Hat Videos – Fedora’s anniversary
https://youtu.be/DOFXBGh6DZ0

Red Hat Videos – Default to open
https://youtu.be/vhYMRtqvMg8

Fedora’s Mission & Foundations
https://docs.fedoraproject.org/en-US/project/

A short history of Fedora
https://youtu.be/NlNlcLD2zRM

Tuesday, 14 April

05:00

Comparing HTTP/3 vs. HTTP/2 Performance [The Cloudflare Blog]

We announced support for HTTP/3, the successor to HTTP/2, during Cloudflare’s birthday week last year. Our goal is, and has always been, to help build a better Internet. Collaborating on standards is a big part of that, and we're very fortunate to do that here.

Even though HTTP/3 is still in draft status, we've seen a lot of interest from our users. So far, over 113,000 zones have activated HTTP/3 and, if you are using an experimental browser those zones can be accessed using the new protocol! It's been great seeing so many people enable HTTP/3: having real websites accessible through HTTP/3 means browsers have more diverse properties to test against.

When we launched support for HTTP/3, we did so in partnership with Google, who simultaneously launched experimental support in Google Chrome. Since then, we've seen more browsers add experimental support: Firefox to their nightly builds, other Chromium-based browsers such as Opera and Microsoft Edge through the underlying Chrome browser engine, and Safari via their technology preview. We closely follow these developments and partner wherever we can help; having a large network with many sites that have HTTP/3 enabled gives browser implementers an excellent testbed against which to try out code.

So, what's the status and where are we now?

The IETF standardization process develops protocols as a series of document draft versions with the ultimate aim of producing a final draft version that is ready to be marked as an RFC. The members of the QUIC Working Group collaborate on analyzing, implementing and interoperating the specification in order to find things that don't work quite right. We launched with support for Draft-23 for HTTP/3 and have since kept up with each new draft, with 27 being the latest at time of writing. With each draft the group improves the quality of the QUIC definition and gets closer to "rough consensus" about how it behaves. In order to avoid a perpetual state of analysis paralysis and endless tweaking, the bar for proposing changes to the specification has been increasing with each new draft. This means that changes between versions are smaller, and that a final RFC should closely match the protocol that we've been running in production.

Benefits

One of the main touted advantages of HTTP/3 is increased performance, specifically around fetching multiple objects simultaneously. With HTTP/2, any interruption (packet loss) in the TCP connection blocks all streams (Head of line blocking). Because HTTP/3 is UDP-based, if a packet gets dropped that only interrupts that one stream, not all of them.

In addition, HTTP/3 offers 0-RTT support, which means that subsequent connections can start up much faster by eliminating the TLS acknowledgement from the server when setting up the connection. This means the client can start requesting data much faster than with a full TLS negotiation, meaning the website starts loading earlier.

The following diagram illustrates packet loss and its impact when HTTP/2 multiplexes two requests. A request comes over HTTP/2 from the client to the server requesting two resources (we’ve colored the requests and their associated responses green and yellow). The responses are broken up into multiple packets and, alas, a packet is lost, so both requests are held up.

HTTP/2 multiplexing two requests
HTTP/3 multiplexing two requests

The above shows HTTP/3 multiplexing 2 requests. A packet is lost that affects the yellow response but the green one proceeds just fine.

Improvements in session startup mean that ‘connections’ to servers start much faster, which means the browser starts to see data more quickly. We were curious to see how much of an improvement, so we ran some tests. To measure the improvement resulting from 0-RTT support, we ran some benchmarks measuring time to first byte (TTFB). On average, with HTTP/3 we see the first byte appearing after 176ms. With HTTP/2 we see 201ms, meaning HTTP/3 is already performing 12.4% better!

Interestingly, not every aspect of the protocol is governed by the drafts or RFC. Implementation choices can affect performance, such as efficient packet transmission and choice of congestion control algorithm. Congestion control is a technique your computer and server use to adapt to overloaded networks: by dropping packets, transmission is subsequently throttled. Because QUIC is a new protocol, getting the congestion control design and implementation right requires experimentation and tuning.

In order to provide a safe and simple starting point, the Loss Detection and Congestion Control specification recommends the Reno algorithm but allows endpoints to choose any algorithm they might like.  We started with New Reno but we know from experience that we can get better performance with something else. We have recently moved to CUBIC and on our network with larger size transfers and packet loss, CUBIC shows improvement over New Reno. Stay tuned for more details in future.

For our existing HTTP/2 stack, we currently support BBR v1 (TCP). This means that in our tests, we’re not performing an exact apples-to-apples comparison as these congestion control algorithms will behave differently for smaller vs larger transfers. That being said, we can already see a speedup in smaller websites using HTTP/3 when compared to HTTP/2. With larger zones, the improved congestion control of our tuned HTTP/2 stack shines in performance.

For a small test page of 15KB, HTTP/3 takes an average of 443ms to load compared to 458ms for HTTP/2. However, once we increase the page size to 1MB that advantage disappears: HTTP/3 is just slightly slower than HTTP/2 on our network today, taking 2.33s to load versus 2.30s.

Synthetic benchmarks are interesting, but we wanted to know how HTTP/3 would perform in the real world.

To measure, we wanted a third party that could load websites on our network, mimicking a browser. WebPageTest is a common framework used to measure page load time, with nice waterfall charts. For analyzing the backend, we used our in-house Browser Insights to capture timings as our edge sees them. We then tied both pieces together with a bit of automation.

As a test case, we decided to use this very blog for our performance monitoring. We configured our own instances of WebPageTest, spread across the world, to load the site over both HTTP/2 and HTTP/3, and enabled HTTP/3 and Browser Insights on the zone. Every time our test scripts kick off a WebPageTest run with an HTTP/3-capable browser loading the page, browser analytics report the data back. Rinse and repeat for HTTP/2 so we can compare.
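
The automation glue on the WebPageTest side can be quite small. The sketch below uses WebPageTest’s classic HTTP API (runtest.php and jsonResult.php) to start a run and poll for the result; the instance URL, API key and test location are placeholders, and the exact parameters may differ on your own instance:

    # Rough sketch: kick off a WebPageTest run and poll for the median
    # first-view load time. Endpoint names follow WebPageTest's classic
    # HTTP API (runtest.php / jsonResult.php); adjust for your instance.
    import time
    import requests

    WPT = "https://webpagetest.example.internal"   # placeholder instance URL
    API_KEY = "changeme"                           # placeholder API key

    def run_test(url: str, location: str) -> str:
        resp = requests.get(f"{WPT}/runtest.php", params={
            "url": url, "location": location, "runs": 3, "f": "json", "k": API_KEY,
        })
        resp.raise_for_status()
        return resp.json()["data"]["testId"]

    def wait_for_result(test_id: str, poll_seconds: int = 30) -> dict:
        while True:
            body = requests.get(f"{WPT}/jsonResult.php", params={"test": test_id}).json()
            if body.get("statusCode") == 200:      # 1xx codes mean still running
                return body["data"]
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        test_id = run_test("https://blog.cloudflare.com/", "ec2-us-east-1:Chrome")
        result = wait_for_result(test_id)
        print("median load time (ms):", result["median"]["firstView"]["loadTime"])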

The following graph shows the page load time for a real-world page -- blog.cloudflare.com -- comparing the performance of HTTP/3 and HTTP/2. We run these performance measurements from several different geographical locations.

[Figure: page load time for blog.cloudflare.com, HTTP/3 vs. HTTP/2, across regions]

As you can see, HTTP/3 performance still trails HTTP/2 performance by about 1-4% on average in North America, and similar results are seen in Europe, Asia and South America. We suspect this is due to the difference in congestion control algorithms: HTTP/2 on BBR v1 vs. HTTP/3 on CUBIC. In the future, we’ll work to support the same congestion control algorithm on both to get a more accurate apples-to-apples comparison.

Conclusion

Overall, we’re very excited to be allowed to help push this standard forward. Our implementation is holding up well, offering better performance in some cases and at worst similar to HTTP/2. As the standard finalizes, we’re looking forward to seeing browsers add support for HTTP/3 in mainstream versions. As for us, we continue to support the latest drafts while at the same time looking for more ways to leverage HTTP/3 to get even better performance, be it congestion tuning, prioritization or system capacity (CPU and raw network throughput).

In the meantime, if you’d like to try it out, just enable HTTP/3 on our dashboard and download a nightly version of one of the major browsers. Instructions on how to enable HTTP/3 can be found on our developer documentation.

Monday, 13 April

13:06

Cloudflare for SSH, RDP and Minecraft [The Cloudflare Blog]


Almost exactly two years ago, we launched Cloudflare Spectrum for our Enterprise customers. Today, we’re thrilled to extend DDoS protection and traffic acceleration with Spectrum for SSH, RDP, and Minecraft to our Pro and Business plan customers.

When we think of Cloudflare, a lot of the time we think about protecting and improving the performance of websites. But the Internet is so much more, ranging from gaming, to managing servers, to cryptocurrencies. How do we make sure these applications are secure and performant?

With Spectrum, you can put Cloudflare in front of your SSH, RDP and Minecraft services, protecting them from DDoS attacks and improving network performance. This allows you to protect the management of your servers, not just your website. Better yet, by leveraging the Cloudflare network you also get increased reliability and increased performance: lower latency!

Remote access to servers

While access to websites from home is incredibly important, being able to remotely manage your servers can be equally critical. Losing access to your infrastructure can be disastrous: people need to know their infrastructure is safe and connectivity is good and performant. Usually, server management is done through SSH (Linux or Unix based servers) and RDP (Windows based servers). With these protocols, performance and reliability are key: you need to know you can always reliably manage your servers and that the bad guys are kept out. What's more, low latency is really important. Every time you type a key in an SSH terminal or click a button in a remote desktop session, that key press or button click has to traverse the Internet to your origin before the server can process the input and send feedback. While increasing bandwidth can help, lowering latency can help even more in getting your sessions to feel like you're working on a local machine and not one half-way across the globe.

All work and no play makes Jack (or rather, Steve) a dull boy

While we stay at home, many of us are also looking to play, and not only work. Video games in particular have seen a huge increase in popularity. As personal interaction becomes harder to come by, Minecraft has become a popular social outlet. Many of us at Cloudflare are using it to stay in touch and have fun with friends and family in the current age of quarantine. And it’s not just Cloudflare employees who feel this way: we’ve seen a big increase in Minecraft traffic flowing through our network. Weekly traffic had remained steady for a while, but it has more than tripled since many countries put their citizens in lockdown:

[Chart: weekly Minecraft traffic on the Cloudflare network]

Minecraft is a particularly popular target for DDoS attacks: it’s not uncommon for players to develop feuds whilst playing the game. When they do, some of the more tech-savvy players opt to take matters into their own hands and launch a (D)DoS attack, rendering the victim’s server unusable for the duration of the attack. Our friends at Hypixel and Nodecraft have known this for many years, which is why they’ve chosen to protect their servers using Spectrum.

While we love recommending their services, we realize some of you prefer to run your own Minecraft server on a VPS (virtual private server like a DigitalOcean droplet) that you maintain. To help you protect your Minecraft server, we're providing Spectrum for Minecraft as well, available on Pro and Business plans. You'll be able to use the entire Cloudflare network to protect your server and increase network performance.

How does it work?

Configuring Spectrum is easy, just log into your dashboard and head on over to the Spectrum tab. From there you can choose a protocol and configure the IP of your server:

[Screenshot: configuring a Spectrum application in the Cloudflare dashboard]

After that, all you have to do is connect using the subdomain you configured instead of your IP address. Traffic will be proxied through Spectrum on the Cloudflare network, keeping the bad guys out and your services safe.
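
If you prefer automation to clicking through the dashboard, the same configuration can also be created via Cloudflare’s API. The sketch below targets the Spectrum applications endpoint; treat the zone ID, token, hostname, origin address and exact payload fields as placeholders to double-check against the current Spectrum API documentation:

    # Sketch: create a Spectrum application for SSH via the Cloudflare API.
    # Zone ID, token, hostname and origin are placeholders; verify the payload
    # fields against the current Spectrum API docs before relying on this.
    import requests

    ZONE_ID = "0123456789abcdef0123456789abcdef"   # placeholder zone ID
    API_TOKEN = "changeme"                          # placeholder API token

    app = {
        "protocol": "tcp/22",                                  # SSH
        "dns": {"type": "CNAME", "name": "ssh.example.com"},   # subdomain to connect to
        "origin_direct": ["tcp://203.0.113.10:22"],            # your server's IP and port
    }

    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/spectrum/apps",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=app,
    )
    resp.raise_for_status()
    print(resp.json())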


So how much does this cost? We’re happy to announce that all paid plans will get access to Spectrum for free, with a generous free data allowance. Pro plans can use SSH and Minecraft with up to 5 gigabytes free each month. Business plans get up to 10 gigabytes free and also get access to RDP. Beyond the free allowance, you will be billed on a per-gigabyte basis.

Spectrum is complementary to Access: it offers DDoS protection and improved network performance as a 'drop-in' product, no configuration necessary on your origins. If you want more control over who has access to which services, we highly recommend taking a look at Cloudflare for Teams.

We’re very excited to extend Cloudflare’s services beyond HTTP traffic, allowing you to protect your core management services and Minecraft game servers. In the future, we’ll add support for more protocols. If you have a suggestion, let us know! In the meantime, if you have a Pro or Business account, head on over to the dashboard and enable Spectrum today!

09:07

Saturday Morning Breakfast Cereal - Covid Explainer [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I've been reading 538 since it was a weird little blog by a lone baseball dork, so this is pretty damn cool.


Today's News:

05:00

Helping sites get back online: the origin monitoring intern project [The Cloudflare Blog]


The most impactful internship experiences involve building something meaningful from scratch and learning along the way. Those can be tough goals to accomplish during a short summer internship, but our experience with Cloudflare’s 2019 intern program met both of them and more! Over the course of ten weeks, our team of three interns (two engineering, one product management) went from a problem statement to a new feature, which is still working in production for all Cloudflare customers.

The project

Cloudflare sits between customers’ origin servers and end users. This means that all traffic to the origin server runs through Cloudflare, so we know when something goes wrong with a server and sometimes reflect that status back to users. For example, if an origin is refusing connections and there’s no cached version of the site available, Cloudflare will display a 521 error. If customers don’t have monitoring systems configured to detect and notify them when failures like this occur, their websites may go down silently, and they may hear about the issue for the first time from angry users.

[Figure: when a customer’s origin server is unreachable, Cloudflare sends a 5xx error back to the visitor]

This problem became the starting point for our summer internship project: since Cloudflare knows when customers' origins are down, let’s send them a notification when it happens so they can take action to get their sites back online and reduce the impact to their users! This work became Cloudflare’s passive origin monitoring feature, which is currently available on all Cloudflare plans.

Over the course of our internship, we ran into lots of interesting technical and product problems, like:

Making big data small

Working with data from all requests going through Cloudflare’s 26 million+ Internet properties to look for unreachable origins is unrealistic from a data volume and performance perspective. Figuring out what datasets were available to analyze for the errors we were looking for, and how to adapt our whiteboarded algorithm ideas to use this data, was a challenge in itself.

Ensuring high alert quality

Because only a fraction of requests show up in the sampled timing and error dataset we chose to use, false positives/negatives were disproportionately likely to occur for low-traffic sites. These are the sites that are least likely to have sophisticated monitoring systems in place (and therefore are most in need of this feature!). In order to make the notifications as accurate and actionable as possible, we analyzed patterns of failed requests throughout different types of Cloudflare Internet properties. We used this data to determine thresholds that would maximize the number of true positive notifications, while making sure they weren’t so sensitive that we end up spamming customers with emails about sporadic failures.
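
As a purely illustrative example of the kind of trade-off involved (the constants and logic below are hypothetical, not the thresholds that shipped), a naive notifier might look something like this:

    # Toy illustration of an alerting threshold on sampled origin errors.
    # All constants are hypothetical; the real feature uses thresholds derived
    # from analyzing failure patterns across many kinds of zones.
    MIN_SAMPLED_REQUESTS = 20     # don't alert on a handful of samples
    ERROR_RATE_THRESHOLD = 0.90   # fraction of sampled requests that failed
    MIN_DURATION_MINUTES = 5      # failure must persist, not be a blip

    def should_notify(sampled_total: int, sampled_5xx: int, minutes_failing: int) -> bool:
        if sampled_total < MIN_SAMPLED_REQUESTS:
            return False  # too little data: avoid false positives on low-traffic sites
        error_rate = sampled_5xx / sampled_total
        return error_rate >= ERROR_RATE_THRESHOLD and minutes_failing >= MIN_DURATION_MINUTES

    print(should_notify(sampled_total=50, sampled_5xx=49, minutes_failing=7))   # True
    print(should_notify(sampled_total=5, sampled_5xx=5, minutes_failing=30))    # False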

Designing actionable notifications

Cloudflare has lots of different kinds of customers, from people running personal blogs with interest in DDoS mitigation to large enterprise companies with extremely sophisticated monitoring systems and global teams dedicated to incident response. We wanted to make sure that our notifications were understandable and actionable for people with varying technical backgrounds, so we enabled the feature for small samples of customers and tested many variations of the “origin monitoring email”. Customers responded right back to our notification emails, sent in support questions, and posted on our community forums. These were all great sources of feedback that helped us improve the message’s clarity and actionability.

We frontloaded our internship with lots of research (both digging into request data to understand patterns in origin unreachability problems and talking to customers/poring over support tickets about origin unreachability) and then spent the next few weeks iterating. We enabled passive origin monitoring for all customers with some time remaining before the end of our internships, so we could spend time improving the supportability of our product, documenting our design decisions, and working with the team that would be taking ownership of the project.

We were also able to develop some smaller internal capabilities that built on the work we’d done for the customer-facing feature, like notifications on origin outage events for larger sites to help our account teams provide proactive support to customers. It was super rewarding to see our work in production, helping Cloudflare users get their sites back online faster after receiving origin monitoring notifications.

Our internship experience

The Cloudflare internship program was a whirlwind ten weeks, with each day presenting new challenges and learnings! Some factors that led to our productive and memorable summer included:

A well-scoped project

It can be tough to find a project that’s meaningful enough to make an impact but still doable within the short time period available for summer internships. We’re grateful to our managers and mentors for identifying an interesting problem that was the perfect size for us to work on, and for keeping us on the rails if the technical or product scope started to creep beyond what would be realistic for the time we had left.

Working as a team of interns

The immediate team working on the origin monitoring project consisted of three interns: Annika in product management and Ilya and Zhengyao in engineering. Having a dedicated team with similar goals and perspectives on the project helped us stay focused and work together naturally.

Quick, agile cycles

Since our project faced strict time constraints and our team was distributed across two offices (Champaign and San Francisco), it was critical for us to communicate frequently and work in short, iterative sprints. Daily standups, weekly planning meetings, and frequent feedback from customers and internal stakeholders helped us stay on track.

Great mentorship & lots of freedom

Our managers challenged us, but also gave us room to explore our ideas and develop our own work process. Their trust encouraged us to set ambitious goals for ourselves and enabled us to accomplish way more than we may have under strict process requirements.

After the internship

In the last week of our internships, the engineering interns, who were based in the Champaign, IL office, visited the San Francisco office to meet with the team that would be taking over the project when we left, and to present our work to the company at our all hands meeting. The most exciting aspect of the visit: our presentation was preempted by Cloudflare’s co-founders announcing the company’s public S-1 filing at the all hands! :)


Over the next few months, Cloudflare added a notifications page for easy configurability and announced the availability of passive origin monitoring along with some other tools to help customers monitor their servers and avoid downtime.

Ilya is working for Cloudflare part-time during the school semester and heading back for another internship this summer, and Annika is joining the team full-time after graduation this May. We’re excited to keep working on tools that help make the Internet a better place!

Also, Cloudflare is doubling the size of the 2020 intern class—if you or someone you know is interested in an internship like this one, check out the open positions in software engineering, security engineering, and product management.

02:00

How to contribute to Folding@home on Fedora [Fedora Magazine]

What is Folding@home?

Folding@home is a distributed computing network for performing biomedical research. Its aim is to further the understanding of, and help develop cures for, a range of diseases. Its current priority is understanding the behavior of the virus that causes COVID-19. This article will show you how you can get involved by donating your computer’s idle time.

Sounds cool, how do I help?

In order to donate your computational power to Folding@home, download the FAHClient package from this page. Once you’ve downloaded the package, open your Downloads folder and double-click it. On a standard Fedora Workstation, for instance, this opens GNOME Software, which prompts you to install the package.

Click Install and enter your password to continue.

How to start Folding@home

Folding@home starts folding as soon as it is installed. To control how much CPU/GPU power it uses, open the web control interface, available here.

The interface contains information about what project you are contributing to. In order to track “points,” the scoring system of Folding@home, you must set up a user account with Folding@home.

Tracking your work

Now that everything’s done, you may be wondering how you can track the work your computer is doing. All you need to do is request a passkey from this page. Enter your email and your desired username. Once you have received the passkey by email, you can enter it into the client settings.

Click on the Change Identity button, and this page appears:

You can also put in a team number here like I have. This allows your points to go towards a group that you support.

Enter the username you gave when you requested a passkey, and then enter the passkey you received.

What next?

That’s all there is to it. Folding@home runs in the background automatically on startup. If you need to pause or lower how much CPU/GPU power it uses, you can change that via the web interface linked above.

You may notice that you don’t receive many work units. That’s because there is currently a shortage of work units to distribute, due to the spike in the number of computers joining the network. However, new research efforts are emerging all the time.

You can see the spike in the number of computers on the network by comparing the same period last year to 4/4/2020.

Photo by Joshua Sortino on Unsplash.

Sunday, 12 April

Saturday, 11 April

05:00

Remote work, regional lockdowns and migration of Internet usage [The Cloudflare Blog]

Remote work, regional lockdowns and migration of Internet usage

The recommendation for social distancing to slow down the spread of COVID-19 has led many companies to adopt a work-from-home policy for their employees in offices around the world, and Cloudflare is no exception.

As a result, a large portion of Internet access shifted from office-focused areas, like city centers and business parks, towards more residential areas like suburbs and outlying towns. We wanted to find out just how broad this geographical traffic migration was, and how different locations were affected by it.

It turns out it is substantial, and the results are quite stunning:

[Differential heatmap: Internet traffic shifting out of city centers into residential areas]

Gathering the Data

So how can we determine if Internet usage patterns have changed from a geographical perspective?

In each Cloudflare Point of Presence (in more than 200 cities worldwide) there's an edge router whose responsibility it is to switch Internet traffic to serve the requests of end users in the region.

These edge routers are the network’s entry point, and for monitoring and debugging purposes each router samples information about the IP packets that traverse it. This data is collected as flow records and contains layer-3 information, such as the source and destination IP addresses, ports, and packet sizes.

These statistical samples allow us to monitor aggregate traffic information, spot unusual activity, and verify that our connections to the greater Internet are working correctly.

By using a geolocation database such as MaxMind’s, we can determine the rough location from which a flow emanates. If the geographic data is too imprecise, it is excluded from our set to reduce noise.

By merging together information about flows that come from similar areas, we can infer a geospatial distribution of traffic volume across the globe.
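
A stripped-down version of that aggregation step could look like the sketch below, which uses MaxMind’s geoip2 reader to place each sampled flow into a coarse latitude/longitude bin and sums the sampled bytes per bin. The flow-record format and the precision cut-off are simplified stand-ins for what the real pipeline does:

    # Sketch: aggregate sampled flow records into a coarse geographic grid.
    # Requires the geoip2 package and a GeoLite2/GeoIP2 City database file.
    # The flow format and the accuracy cut-off are simplified assumptions.
    from collections import defaultdict
    import geoip2.database
    import geoip2.errors

    GRID = 0.05            # bin size in degrees (roughly 5 km of latitude)
    MAX_RADIUS_KM = 50     # drop lookups that are too imprecise

    def aggregate(flows, mmdb_path="GeoLite2-City.mmdb"):
        """flows: iterable of (source_ip, sampled_bytes) tuples."""
        volume = defaultdict(int)
        with geoip2.database.Reader(mmdb_path) as reader:
            for ip, nbytes in flows:
                try:
                    loc = reader.city(ip).location
                except geoip2.errors.AddressNotFoundError:
                    continue
                if loc.latitude is None or (loc.accuracy_radius or 0) > MAX_RADIUS_KM:
                    continue  # too imprecise: exclude to reduce noise
                cell = (round(loc.latitude / GRID) * GRID,
                        round(loc.longitude / GRID) * GRID)
                volume[cell] += nbytes
        return volume

    # Example with two fake sampled flows (documentation-range IPs, illustrative only).
    print(aggregate([("203.0.113.7", 1500), ("198.51.100.23", 9000)]))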

Visual Representation

This distribution of traffic volume is an indication of Internet usage patterns. By plotting a geospatial heatmap of the Internet traffic in New York City for two separate dates four weeks apart (February 19, 2020 and March 18, 2020), we can already see traffic volume eroding from the city center and dissipating into the surrounding areas.

All the charts in this blog post show daytime traffic in each geography. This is done to show the difference that working from home is having on Internet traffic.

[Heatmap: New York City daytime Internet traffic, February 19, 2020]
[Heatmap: New York City daytime Internet traffic, March 18, 2020]

But the migration pattern really jumps out when we produce a differential heatmap of these two dates:

[Differential heatmap: New York City, February 19 vs. March 18, 2020]

This chart shows a diff of the “before” and “after” scenarios, highlighting only differences in volume between the two dates. The red color represents areas where Internet usage has decreased since February 19, and green where it has increased.

It strongly suggests a migration of Internet usage towards suburban and residential areas.

The data compares working-hours periods (10:00 to 16:00 local time) on two Wednesdays four weeks apart (February 19 and March 18).
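
Continuing the hypothetical aggregation sketch above, the ‘diff’ view is then just a per-cell subtraction between the two dates, with the sign deciding whether a cell is drawn red (decrease) or green (increase):

    # Sketch: per-cell difference between two aggregated traffic grids
    # (e.g. working-hours volume on February 19 vs. March 18), as used for
    # the red/green differential heatmaps. Inputs are {(lat, lon): bytes}
    # dicts like those produced by the aggregation sketch above.
    def differential(before: dict, after: dict) -> dict:
        cells = set(before) | set(after)
        return {cell: after.get(cell, 0) - before.get(cell, 0) for cell in cells}

    def classify(diff: dict):
        for cell, delta in sorted(diff.items(), key=lambda kv: kv[1]):
            color = "green (increase)" if delta > 0 else "red (decrease)" if delta < 0 else "neutral"
            yield cell, delta, color

    before = {(40.75, -73.99): 10_000, (40.80, -73.95): 4_000}   # toy "Feb 19" grid
    after = {(40.75, -73.99): 6_000, (40.80, -73.95): 7_500}     # toy "Mar 18" grid
    for cell, delta, color in classify(differential(before, after)):
        print(cell, delta, color)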

Cities

We produced similar visual representations for other impacted locations around the world. All exhibit similar migration patterns.

Seattle, US

[Differential heatmap: Seattle, US]

Additional features can be picked out from the chart. The red dot in Silverdale, WA appears to be Internet access from the Kitsap Mall (which is currently closed). And Sea-Tac Airport is seen as a red area between Burien and Renton.

San Francisco Bay Area, US

[Differential heatmap: San Francisco Bay Area, US]

Areas where large Silicon Valley businesses have sent employees home are clearly visible.

Paris, France

[Differential heatmap: Paris, France]

The red area in north east Paris appears to be the airport at Le Bourget and the surrounding industrial zone. To the west there’s another red area: Le Château de Versailles.

Berlin, Germany

[Differential heatmap: Berlin, Germany]

In Berlin, a red dot near the Tegeler See is Flughafen Berlin-Tegel, likely resulting from fewer passengers passing through the airport.

Network Impact

As shown above, geographical Internet usage changes are visible from Internet traffic exchanged between end users and Cloudflare at various locations around the globe.

Besides the geographical migration in various metropolitan areas, the overall volume of traffic in these locations has also increased by between 10% and 40% in just four weeks.

It’s interesting to reflect that the Internet was originally conceived as a communications network for humanity during a crisis, and it’s come a long way since then. But in this moment of crisis, it’s being put to use for that original purpose. The Internet was built for this.

Cloudflare helps power a substantial portion of Internet traffic. During this time of increased strain on networks around the world, our team is working to ensure our service continues to run smoothly.

We are also providing our Cloudflare for Teams products at no cost to companies of any size struggling to support their employees working from home: learn more.