Tuesday, 20 August

02:00

GNOME 3.34 Works Out Refined XWayland Support For X11 Apps Run Under Sudo [Phoronix]

GNOME 3.34 continues to look like an incredibly great release in the performance department as well as for Wayland users...

01:00

How SpaceX Plans To Move Starship From Cocoa Site To Kennedy Space Center [Slashdot]

New submitter RhettLivingston writes: Real plans for the move of Starship Mk 2 from its current construction site in Cocoa to the Kennedy Space Center have finally emerged. A News 6 Orlando report identifies permit applications and observed preparations for the move, which will take a land and sea route. Barring some remarkably hasty road compaction and paving, the prototype will start its journey off-road, crossing a recently cleared path through vacant land to reach Grissom Parkway. It will then travel east in the westbound lanes of SR 528 for a short distance before loading onto a barge in the Indian River via a makeshift dock. The rest of the route is relatively conventional, including offloading at KSC at the site previously used for delivery of the Space Shuttle's external fuel tanks. Given the recent construction of new facilities at the current construction site, it is likely that this will not be the last time this route is used. SpaceX declined to say how the company will transport the spacecraft or when the relocation will occur. SpaceX's "Mk2" orbital Starship prototype is designed to test out the technologies and basic design of the final Starship vehicle -- a giant passenger spacecraft that SpaceX is making to take people to the Moon and Mars.

Read more of this story at Slashdot.

00:57

IBM, Intel tease 2020's specialist chips: Power9 'bandwidth beast' – and Spring Crest Nervana neural-net processor [The Register]

Plus, Cerebras hypes up AI-focused '400,000-core die the size of an iPad'

Hot Chips  At the Hot Chips symposium in Silicon Valley on Monday, IBM and Intel each revealed a few more details about some upcoming processors of theirs.…

Monday, 19 August

22:50

RADV Vulkan Driver Lands Renoir APU Support In Time For Mesa 19.2 [Phoronix]

Just hours ahead of the Mesa 19.2 feature freeze and days after the RadeonSI OpenGL driver added Renoir support, the RADV Vulkan driver has picked up support for this next-gen Zen 2 + Vega APU...

22:06

DragonFlyBSD Developing DSynth As Synth Rewrite For Custom Package Building [Phoronix]

Another creation in the works by DragonFlyBSD lead developer Matthew Dillon, DSynth is a C rewrite of the FreeBSD-originating Synth program that serves as a custom package repository builder...

21:30

Scientists Are 99 Percent Sure They Just Detected a Black Hole Eating a Neutron Star [Slashdot]

An anonymous reader quotes a report from Motherboard: On Wednesday, a gravitational wave called S190814bv was detected by the U.S.-based Laser Interferometer Gravitational-Wave Observatory (LIGO) and its Italian counterpart Virgo. Based on its known properties, scientists think there is a 99% probability that the source of the wave is a black hole that ate a neutron star. In contrast to black hole mergers, neutron star collisions do produce a lot of light. When a gravitational wave from a neutron star crash was detected in 2017, scientists were able to pinpoint bright emissions from the event -- called an optical counterpart -- in the days that followed the wave detection. This marked the dawn of a technique called "multi-messenger astronomy," in which scientists use multiple types of signals from space to examine astronomical objects. Ryan Foley, an astronomer at UC Santa Cruz, was part of the team that tracked down that first optical counterpart, a feat that has not yet been repeated. He and his colleagues are currently scanning the skies with telescopes, searching for any light that might have been radiated by the new suspected merger of a black hole and neutron star. If the team were to pick up light from the event within the coming weeks, they would be witnessing the fallout of a black hole spilling a neutron star's guts while devouring it. This would provide a rare glimpse of the exotic properties of these extreme astronomical objects and could shed light on everything from subatomic physics to the expansion rate of the universe. "We've never detected a neutron star and a black hole together," said Foley. "If it turns out to be right, then we've confirmed a new type of star system. It's that fundamental." He added: "If you learn about how neutron stars are built, that can tell you about how atoms are built. This is something that is fundamental to [how] everything in our daily life works."

Read more of this story at Slashdot.

20:02

Jaguar and Audi SUVs Fail To Dent Tesla's Electric-Car Dominance [Slashdot]

Tesla has managed to expand its electric-car market share, despite two new battery-powered luxury SUVs that have been in U.S. showrooms for the last 10 months: Jaguar's I-Pace and Audi's e-tron. Bloomberg reports: Their starts are the latest indications that legacy automakers aren't assured instant success when they roll out new plug-in models. Tesla's Model S and X have largely held their own against the two crossovers, which offer shorter range and less plentiful public charging infrastructure. Jaguar and Audi also lack the cool factor Musk has cultivated for the Tesla brand by taking an aggressive approach to autonomy and using over-the-air software updates to add games and entertainment features. Tesla's Model X and Model S each boast more than 300 miles of range, and the cheaper Model 3 travels 240 miles between charges. Jaguar's $69,500 I-Pace is rated at 234 miles, and Audi's $74,800 e-tron registers 204 miles. Jaguar's marketing team spent years laying the groundwork to introduce the I-Pace. In 2016, the brand joined Formula E, an open-wheeled, electric-powered race circuit similar to Formula One. Porsche and Mercedes-Benz are also joining Formula E for the 2019-2020 season to help generate buzz for the new all-electric models they have coming out. The circuit makes stops in cities including New York, Hong Kong and London, which the brands are banking on as major markets for plug-in cars. But while Formula E is drawing crowds of urban dwellers and a substantial audience on social media, all that buzz may not necessarily translate into showroom traffic.

Read more of this story at Slashdot.

19:58

Breaker, breaker. Apple's iOS 12.4 update breaks jailbreak break, un-breaks the break. 10-4 [The Register]

File under: 'Breaking' news

iPhone hackers have discovered Apple's most recent iOS update, 12.4, released in July, accidentally reopened a code-execution vulnerability that was previously patched – a vulnerability that can be abused to jail-break iThings.…

19:25

Pentagon Conducts First Test of Previously Banned Missile [Slashdot]

The U.S. military has conducted a flight test of a type of missile banned for more than 30 years by a treaty that both the United States and Russia abandoned this month, the Pentagon said. The Associated Press reports: The test off the coast of California on Sunday marked the resumption of an arms competition that some analysts worry could increase U.S.-Russian tensions. The Trump administration has said it remains interested in useful arms control but questions Moscow's willingness to adhere to its treaty commitments. The Pentagon said it tested a modified ground-launched version of a Navy Tomahawk cruise missile, which was launched from San Nicolas Island and accurately struck its target after flying more than 500 kilometers (310 miles). The missile was armed with a conventional, not nuclear, warhead. Defense officials had said last March that this missile likely would have a range of about 1,000 kilometers (620 miles) and that it might be ready for deployment within 18 months. The missile would have violated the Intermediate-range Nuclear Forces (INF) Treaty of 1987, which banned all types of missiles with ranges between 500 kilometers (310 miles) and 5,500 kilometers (3,410 miles). The U.S. and Russia withdrew from the treaty on Aug. 2, prompted by what the administration said was Russia's unwillingness to stop violating the treaty's terms. Russia accused the U.S. of violating the agreement. The Pentagon says it also intends to begin testing, probably before the end of this year, an INF-range ballistic missile with a range of roughly 3,000 kilometers (1,864 miles) to 4,000 kilometers (2,485 miles).

Read more of this story at Slashdot.

18:45

Twitter Blocks State-Controlled Media Outlets From Advertising On Its Social Network [Slashdot]

Twitter is now blocking state-run media outlets from advertising on its platform. The new policy was announced just hours after the company was criticized for running promoted tweets by China's largest state agency that paint pro-democracy demonstrations in Hong Kong as violent, even though the rallies, including one that drew an estimated 1.7 million people this weekend, have been described as mostly peaceful by international media. TechCrunch reports: State-funded media enterprises that do not rely on taxpayer dollars for their financing and don't operate independently of the governments that finance them will no longer be allowed to advertise on the platform, Twitter said in a statement. That leaves a big exception for outlets like the Associated Press, the British Broadcasting Corp., Public Broadcasting Service and National Public Radio, according to reporting from BBC reporter Dave Lee. The affected accounts will be able to use Twitter, but can't access the company's advertising products, Twitter said in a statement. The policy applies to news media outlets that are financially or editorially controlled by the state, Twitter said. The company said it will make its policy determinations on the basis of media freedom and independence, including editorial control over articles and video, the financial ownership of the publication, the influence or interference governments may exert over editors, broadcasters and journalists, and political pressure or control over the production and distribution process. Twitter said the advertising rules wouldn't apply to entities that are focused on entertainment, sports or travel, but if there's news in the mix, the company will block advertising access. Affected outlets have 30 days before they're cut off from Twitter's advertising products, and the company is halting all their existing campaigns.

Read more of this story at Slashdot.

18:13

Mesa 19.2's Feature Freeze / Release Candidate Process Beginning Tomorrow [Phoronix]

Mesa 19.2 was supposed to be branched marking its feature freeze two weeks ago on 6 August along with the issuing of the first release candidate. That milestone has yet to be crossed but should happen tomorrow...

18:03

Terrorists Turn To Bitcoin For Funding, and They're Learning Fast [Slashdot]

An anonymous reader quotes a report from The New York Times: Hamas, the militant Palestinian group, has been designated a terrorist organization by Western governments and some others and has been locked out of the traditional financial system. But this year its military wing has developed an increasingly sophisticated campaign to raise money using Bitcoin. In the latest version of the website set up by the wing, known as the Qassam Brigades, every visitor is given a unique Bitcoin address where he or she can send the digital currency, a method that makes the donations nearly impossible for law enforcement to track. The site, which is available in seven languages and features the brigades' logo, with a green flag and a machine gun, contains a well-produced video that explains how to acquire and send Bitcoin without tipping off the authorities. Terrorists have been slow to join other criminal elements that have been drawn to Bitcoin and have used it for everything from drug purchases to money laundering. But in recent months, government authorities and organizations that track terrorist financing have begun to raise alarms about an uptick in the number of Islamist terrorist organizations experimenting with Bitcoin and other digital coins. The yields from individual campaigns appear to be modest -- in the tens of thousands of dollars. But the authorities note that terrorist attacks often require little funding. And the groups' use of cryptocurrencies appears to be getting more sophisticated. The Middle East Media Research Institute, a nonprofit that tracks and translates communication from terrorist groups, is about to publish a 253-page report about the increased signs of cryptocurrency use by terrorist organizations. According to the NYT, the report will focus on groups in Syria that are on the run as Islamic militants have lost almost all the territory they used to hold.

Read more of this story at Slashdot.

18:02

The Pwn Star State: Nearly two dozen Texas towns targeted by tiresome ransomware [The Register]

Officials suspect a coordinated extortion campaign

Twenty-three towns in Texas have been targeted with ransomware in what appears to be a coordinated attack.…

17:38

Behold, the quantum lawsuit in which both sides claim victory: Rimini St fails to bag $30m refund from Oracle [The Register]

Order banning any further infringement stays, as does Big Red's legal bill

The quantum legal battle between Oracle and Rimini Street continues, with an appeals judge this month confirming Rimini can't claw back the $28.5m it was forced to cough up to foot Oracle's lawyer bills. And, yes, Rimini is still banned from ripping off Oracle's intellectual property.…

17:20

How Malformed Packets Caused CenturyLink's 37-Hour, Nationwide Outage [Slashdot]

Ars Technica reports on what went wrong last December when CenturyLink had a nationwide, 37-hour outage that disrupted 911 service for millions of Americans and prevented completion of at least 886 calls to 911. From the report: Problems began the morning of December 27 when "a switching module in CenturyLink's Denver, Colorado node spontaneously generated four malformed management packets," the FCC report said. CenturyLink and Infinera, the vendor that supplied the node, told the FCC that "they do not know how or why the malformed packets were generated." Malformed packets "are usually discarded immediately due to characteristics that indicate that the packets are invalid," but that didn't happen in this case, the FCC report explained: "In this instance, the malformed packets included fragments of valid network management packets that are typically generated. Each malformed packet shared four attributes that contributed to the outage: 1) a broadcast destination address, meaning that the packet was directed to be sent to all connected devices; 2) a valid header and valid checksum; 3) no expiration time, meaning that the packet would not be dropped for being created too long ago; and 4) a size larger than 64 bytes." The switching module sent these malformed packets "as network management instructions to a line module," and the packets "were delivered to all connected nodes," the FCC said. Each node that received the packet then "retransmitted the packet to all its connected nodes." The report continued: "Each connected node continued to retransmit the malformed packets across the proprietary management channel to each node with which it connected because the packets appeared valid and did not have an expiration time. This process repeated indefinitely. 
The exponentially increasing transmittal of malformed packets resulted in a never-ending feedback loop that consumed processing power in the affected nodes, which in turn disrupted the ability of the nodes to maintain internal synchronization. Specifically, instructions to output line modules would lose synchronization when instructions were sent to a pair of line modules, but only one line module actually received the message. Without this internal synchronization, the nodes' capacity to route and transmit data failed. As these nodes failed, the result was multiple outages across CenturyLink's network." While CenturyLink dispatched network engineers to log in to affected nodes and removed the Denver node that had generated the malformed packets, the outage continued because "the malformed packets continued to replicate and transit the network, generating more packets as they echoed from node to node," the FCC wrote. Just after midnight, at least 20 hours after the problem began, CenturyLink engineers "began instructing nodes to no longer acknowledge the malformed packets." They also "disabled the proprietary management channel, preventing it from further transmitting the malformed packets." The FCC report said that CenturyLink could have prevented the outage or lessened its negative effects by disabling the system features that were not in use, using stronger filtering to prevent the malformed packets from propagating, and setting up "memory and processor utilization alarms" in its network monitoring.
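The propagation mechanism the FCC describes -- broadcast packets that appear valid and carry no expiration time, rebroadcast by every receiving node -- can be illustrated with a small simulation. The sketch below uses a hypothetical four-node full-mesh topology (the node names and numbers are illustrative, not CenturyLink's actual Infinera management network): without an expiration or hop limit, the in-flight packet count grows exponentially each round; with one, the storm dies out.

```python
def simulate(neighbors, start, hop_limit=None, rounds=6):
    """Count in-flight packets per round when every node rebroadcasts
    each received packet to all of its neighbors (hypothetical model)."""
    # Each in-flight packet is (current_node, hops_travelled).
    in_flight = [(start, 0)]
    history = []
    for _ in range(rounds):
        next_round = []
        for node, hops in in_flight:
            if hop_limit is not None and hops >= hop_limit:
                continue  # packet "expires": dropped instead of rebroadcast
            for peer in neighbors[node]:
                next_round.append((peer, hops + 1))
        history.append(len(next_round))
        in_flight = next_round
    return history

# A small fully meshed management network of 4 nodes (hypothetical).
mesh = {n: [m for m in range(4) if m != n] for n in range(4)}

print(simulate(mesh, start=0))               # no expiration: triples every round
print(simulate(mesh, start=0, hop_limit=2))  # with a hop limit the storm dies out
```

On this four-node mesh, each round multiplies the packet count by three when nothing expires -- the same runaway feedback loop the FCC report describes -- while a simple hop limit (the role an expiration time would have played) extinguishes the flood after two rounds.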

Read more of this story at Slashdot.

16:40

Sony Buys Spider-Man Developer Insomniac Games [Slashdot]

Sony has purchased the California-based game studio Insomniac Games, best known for last year's Spider-Man on PS4, which sold 13.2 million copies. Sony says Insomniac will become an exclusive PlayStation developer. Kotaku reports: Founded in 1994, Insomniac remained independent for 25 years, working largely with Sony on series like Ratchet & Clank and Resistance but also with other big game companies like Microsoft, which published the colorful open-world game Sunset Overdrive (unlikely to get a sequel any time soon). Insomniac has also worked on several VR games with Oculus, including the upcoming Stormland, currently announced as an Oculus Rift exclusive. Notably, Insomniac's previous VR games have not been released on PlayStation VR.

Read more of this story at Slashdot.

16:01

OBS Studio 24.0 Will Let You Pause While Recording, Other New Options [Phoronix]

For those using OBS Studio for cross-platform live-streaming and screen recording needs, OBS Studio 24.0 is on the way, with a release candidate out first to vet the new features coming in this big update...

16:00

Paging Big Brother: In Amazon's Bookstore, Orwell Gets a Rewrite [Slashdot]

As fake and illegitimate texts proliferate online, books are becoming a form of misinformation. The author of "1984" would not be surprised. From a report: In George Orwell's "1984," the classics of literature are rewritten into Newspeak, a revision and reduction of the language meant to make bad thoughts literally unthinkable. "It's a beautiful thing, the destruction of words," one true believer exults. Now some of the writer's own words are getting reworked in Amazon's vast virtual bookstore, a place where copyright laws hold remarkably little sway. Orwell's reputation may be secure, but his sentences are not. Over the last few weeks I got a close-up view of this process when I bought a dozen fake and illegitimate Orwell books from Amazon. Some of them were printed in India, where the writer is in the public domain, and sold to me in the United States, where he is under copyright. Others were straightforward counterfeits, like the edition of his memoir "Down and Out in Paris and London" that was edited for high school students. The author's estate said it did not give permission for the book, printed by Amazon's self-publishing subsidiary. Some counterfeiters are going as far as to claim Orwell's classics as their own property, copyrighting them with their own names.

Read more of this story at Slashdot.

15:20

Cerebras Systems Unveils a Record 1.2 Trillion Transistor Chip For AI [Slashdot]

An anonymous reader quotes a report from VentureBeat: New artificial intelligence company Cerebras Systems is unveiling the largest semiconductor chip ever built. The Cerebras Wafer Scale Engine has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. Intel's first processor, the 4004, had 2,300 transistors in 1971, and a recent Advanced Micro Devices processor has 32 billion transistors. Samsung has actually built a flash memory chip, the eUFS, with 2 trillion transistors. But the Cerebras chip is built for processing, and it boasts 400,000 cores on 42,225 square millimeters. It is 56.7 times larger than the largest Nvidia graphics processing unit, which measures 815 square millimeters and has 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth.

Read more of this story at Slashdot.

15:19

Canadian ISP Telus launches novel solution to deal with excess email: Crash your servers and wipe it all [The Register]

Dell-EMC storage blunder leaves Canucks fuming for four days

Dealing with email is possibly the most tedious daily exercise that the modern digital world has forced on us. But 13 million customers of Canadian ISP Telus have discovered that not having that problem is more of a burden.…

14:45

Newt Gingrich Trying To Sell Trump on a Cheap Moon Plan [Slashdot]

WindBourne writes: Newt Gingrich and an eclectic band of NASA skeptics are trying to sell President Donald Trump on a reality show-style plan to jump-start the return of humans to the moon -- at a fraction of the space agency's estimated price tag. The proposal, whose other proponents range from an Air Force lieutenant general to the former publicist for pop stars Michael Jackson and Prince, includes a $2 billion sweepstakes pitting billionaires Elon Musk, Jeff Bezos and other space pioneers against each other to see who can establish and run the first lunar base, according to a summary of the plan shared with POLITICO. That's far less taxpayer money than NASA's anticipated lunar plan, which relies on traditional space contractors, such as Boeing and Lockheed Martin, and is projected to cost $50 billion or more. Backers of the novel approach have briefed administration officials serving on the National Space Council, several members of the group confirmed, though they declined to provide specifics of the internal conversations.

Read more of this story at Slashdot.

14:28

Dear Planet Earth: Patch Webmin now – zero-day exploit emerges for potential hijack hole in server control panel [The Register]

Flawed code traced to home build system, vulnerability can be attacked in certain configs

Updated  The maintainers of Webmin – an open-source application for system-administration tasks on Unix-flavored systems – have released Webmin version 1.930 and the related Usermin version 1.780 to patch a vulnerability that can be exploited to achieve remote code execution in certain configurations.…

14:18

Generous Google gives Chrome users Inbox Zero: Sign-in outage boots own browser out of webmail, services [The Register]

Baffling bug forces folks to use Safari, IE, etc

A bizarre outage left unlucky Chrome users unable to sign into Google services, from Gmail to Google Docs to even Chromebooks, earlier today.…

14:04

Bernie Sanders Wants To Ban Facial Recognition Use By Police [Slashdot]

Democratic presidential candidate Senator Bernie Sanders (I-VT) wants to put an end to police use of facial recognition software. Sanders called for the ban as part of a criminal justice reform plan introduced over the weekend ahead of a two-day tour of South Carolina. From a report: The plan also calls for the ban of for-profit prisons and would revoke the practice of law enforcement agencies benefiting from civil asset forfeitures. Sanders kicked off his campaign by saying "I'm running for president because we need to understand that artificial intelligence and robotics must benefit the needs of workers, not just corporate America and those who own that technology."

Read more of this story at Slashdot.

13:21

Hacker Releases First Public Jailbreak for Up-to-Date iPhones in Years [Slashdot]

Apple has mistakenly made it a bit easier to hack iPhone users who are on the latest version of its mobile operating system iOS by unpatching a vulnerability it had already fixed. From a report: Hackers quickly jumped on this over the weekend, and publicly released a jailbreak for current, up-to-date iPhones -- the first free public jailbreak for a fully updated iPhone that's been released in years. Security researchers found this weekend that iOS 12.4, the latest version, released in July, reintroduced a bug found by a Google hacker that was fixed in iOS 12.3. That means it's currently relatively easy to not only jailbreak up-to-date iPhones, but also hack iPhone users, according to people who have studied the issue. "Due to 12.4 being the latest version of iOS currently available and the only one which Apple allows upgrading to, for the next couple of days (till 12.4.1 comes out), all devices of this version (or any 11.x and 12.x below 12.3) are jailbreakable -- which means they are also vulnerable to what is effectively a 100+ day exploit," said Jonathan Levin, a security researcher and trainer who specializes in iOS, referring to the fact that this vulnerability can be exploited with code that was found more than 100 days ago. Pwn20wnd, a security researcher who develops iPhone jailbreaks, published a jailbreak for iOS 12.4 on Monday.

Read more of this story at Slashdot.

13:07

Trump blinks again in trade war bluff-fest with China: Huawei gets another 90-day stay of US import execution [The Register]

I want to get Huawei, I want to fry Huawei, yeah, yeah, yeah

Uncle Sam today granted another "extension" to Huawei, allowing the Chinese equipment manufacturer to continue to buy and use American electronic components and software despite being on an "entity list" of banned recipients of US tech.…

12:41

An Ode To Microsoft Encarta [Slashdot]

Scott Hanselman: Microsoft Encarta came out in 1993 and was one of the first CD-ROMs I had. It stopped shipping in 2009 on DVD. I recently found a disk and was impressed that it installed just perfectly on my latest Windows 10 machine and runs nicely. Encarta existed in an interesting place between the rise of the internet and computers' ability to deal with (at the time) massive amounts of data. CD-ROMs could bring us 700 MEGABYTES, which was unbelievable when compared to the 1.44MB (or even 120KB) floppy disks we were used to. The idea that Encarta was so large that it was 5 CD-ROMs (!) was staggering, even though that's just a few gigs today. Even a $5 USB stick could hold Encarta - twice! My kids can't possibly intellectualize the scale that data exists in today. We could barely believe that a whole bookshelf of encyclopedias was now in our pockets. I spent hours and hours just wandering around random articles in Encarta. The scope of knowledge was overwhelming, but accessible. But it was contained - it was bounded. Today, my kids just assume that the sum of all human knowledge is available with a single search or a "hey Alexa," so the world's mysteries are less mysterious and they become bored by the Paradox of Choice. In a world of 4k streaming video, global wireless, and high-speed everything, there's really no analog to the feeling we got watching the Moon Landing as a video in Encarta - short of watching it live on TV in 1969! For most of us, this was the first time we'd ever seen full-motion video on-demand on a computer in any sort of fidelity - and these are mostly 320x240 or smaller videos!

Read more of this story at Slashdot.

12:02

Developers Accuse Apple of Anti-Competitive Behavior With Its Privacy Changes in iOS 13 [Slashdot]

A group of app developers have penned a letter to Apple CEO Tim Cook, arguing that certain privacy-focused changes to Apple's iOS 13 operating system will hurt their business. From a report: In a report by The Information, the developers were said to have accused Apple of anti-competitive behavior when it comes to how apps can access user location data. With iOS 13, Apple aims to curtail apps' abuse of its location-tracking features as part of its larger privacy focus as a company. Today, many apps ask users upon first launch to give their app the "Always Allow" location-tracking permission. Users can confirm this with a tap, unwittingly giving apps far more access to their location data than is actually necessary, in many cases. In iOS 13, however, Apple has tweaked the way apps can request location data. There will now be a new option upon launch presented to users, "Allow Once," which allows users to first explore the app to see if it fits their needs before granting the app developer the ability to continually access location data. This option will be presented alongside existing options, "Allow While Using App" and "Don't Allow." The "Always" option is still available, but users will have to head to iOS Settings to manually enable it. The app developers argue that this change may confuse less technical users, who will assume the app isn't functioning properly unless they figure out how to change their iOS Settings to ensure the app has the proper permissions.

Read more of this story at Slashdot.

12:00

Approved: Fedora 31 To Drop i686 Everything/Modular Repositories [Phoronix]

The month-old proposal for the upcoming Fedora 31 Linux distribution release to drop its i686 repositories for Everything and Modules was voted on today by the Fedora Engineering Steering Committee...

11:21

Twitter is Blocked in China, But Chinese State News Agency is Buying Promoted Tweets To Portray Hong Kong Protestors as Violent [Slashdot]

Chinese state-run news agency Xinhua is promoting tweets attacking the protestors and claiming they do not have wider support. From a report: Twitter is being criticized for running promoted tweets by China's largest state news agency that paint pro-democracy demonstrations in Hong Kong as violent, even though the rallies, including one that drew an estimated 1.7 million people this weekend, have been described as mostly peaceful by international media. Promoted tweets from China Xinhua News, the official mouthpiece of the Chinese Communist Party, were spotted and shared by the Twitter account of Pinboard, the bookmarking service founded by Maciej Ceglowski, and other users. The demonstrations began in March to protest a now-suspended extradition bill, but have grown to encompass other demands, including the release of imprisoned protestors, inquiries into police conduct, the resignation of current Chief Executive of Hong Kong Carrie Lam and a more democratic process for electing Legislative Council members and the chief executive. UPDATE: Twitter is now blocking state-run media outlets from advertising on its platform.

Read more of this story at Slashdot.

10:41

Small Companies Play Big Role in Robocall Scourge, But Remedies Are Elusive [Slashdot]

The billions of illegal robocalls inundating Americans are being facilitated largely by small telecom carriers that transmit calls over the internet, industry officials say, but authorities are at odds over what -- if anything -- they can do to stop them. From a report: These telecom carriers typically charge fractions of a cent per call, making their money on huge volume. Their outsize role in the robocall scourge has become apparent as large telecom companies get better at tracing robocalls to their source, spurring calls for regulators to hold them accountable. "There are definitely repeat offenders who keep showing up as the sources of illegal robocalls," said Patrick Halley, a senior vice president at USTelecom, a trade association of telecom companies that runs a robocall-tracing group. "Carriers that knowingly allow the origination of billions of illegal robocalls should be held accountable." U.S. regulators have conflicting interpretations of their ability to take the companies to court, however. And carriers aren't explicitly required to try to differentiate between legal and illegal robocalls, further clouding enforcement.

Read more of this story at Slashdot.

10:39

AMD Posts Navi Display Stream Compression Support For Linux [Phoronix]

One of the kernel-side features not yet in place for AMD's newest Navi graphics processors on Linux has been Display Stream Compression support, but that is being squared away with a new patch series...

10:01

Wireless Carrier Throttling of Online Video Is Pervasive: Study [Slashdot]

U.S. wireless carriers have long said they may slow video traffic on their networks to avoid congestion and bottlenecks. But new research shows the throttling happens pretty much everywhere all the time. From a report: Researchers from Northeastern University and University of Massachusetts Amherst conducted more than 650,000 tests in the U.S. and found that from early 2018 to early 2019, AT&T throttled Netflix 70% of the time and Google's YouTube service 74% of the time. But AT&T didn't slow down Amazon's Prime Video at all. T-Mobile throttled Amazon Prime Video in about 51% of the tests, but didn't throttle Skype and barely touched Vimeo, the researchers say in a paper [PDF] to be presented at an industry conference this week.

Read more of this story at Slashdot.

09:40

Microsoft gets some jClarity on Azure Java workloads, swallows London-based firm [The Register]

Write once, optimise everywhere amirite?

Microsoft has snapped up London-based jClarity in an effort to bump up the performance of Java workloads on Azure.…

09:36

Saturday Morning Breakfast Cereal - Complex [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I swear 'in extremis' is a permissible phrase, Google spellcheck be damned.


Today's News:

09:26

POWER9 & ARM Performance Against Intel Xeon Cascadelake + AMD EPYC Rome [Phoronix]

For those wondering how ARM and IBM POWER hardware stack up against AMD's new EPYC "Rome" processors and Intel's existing Xeon "Cascade Lake" processors, here is a round of tests from the POWER9 Talos II, Ampere eMAG, and Cavium ThunderX, looking at cross-architecture Linux CPU performance in the server space.

09:21

Degrading Tor Network Performance Only Costs a Few Thousand Dollars Per Month [Slashdot]

Threat actors or nation-states looking into degrading the performance of the Tor anonymity network can do it on the cheap, for only a few thousand US dollars per month, new academic research has revealed. An anonymous reader writes: According to researchers from Georgetown University and the US Naval Research Laboratory, threat actors can use tools as banal as public DDoS stressers (booters) to slow down Tor network download speeds or hinder access to Tor's censorship circumvention capabilities. Academics said that while an attack against the entire Tor network would require immense DDoS resources (512.73 Gbit/s) and would cost around $7.2 million per month, there are far simpler and more targeted means for degrading Tor performance for all users. In research presented this week at the USENIX security conference, the research team showed the feasibility and effects of three types of carefully targeted "bandwidth DoS [denial of service] attacks" that can wreak havoc on Tor and its users. Researchers argue that while these attacks don't shut down or clog the Tor network entirely, they can be used to dissuade or drive users away from Tor due to prolonged poor performance, which can be an effective strategy in the long run.

Read more of this story at Slashdot.

09:00

Four more years! Four more years! Svelte Linux desktop Xfce gets first big update since 2015 [The Register]

Hop from 4.12 to 4.14 fixes 'a boatload of bugs'. Hooray!

In contrast to the frenetic pace of updates now typical in the software industry, the team behind Xfce, a lightweight desktop for Linux, have released version 4.14 nearly four-and-a-half years since the last stable release, 4.12.…

08:40

The Latest Claim To Satoshi Nakamoto is the 'Stupidest One Yet' [Slashdot]

An anonymous reader shares a report: For years, Faketoshis have been fighting to claim the Bitcoin throne, trying to make us all believe they were responsible for the cryptocurrency's creation. But things took a different turn this weekend after an unknown person(s) decided it was time to reveal their identity as the 'real' Satoshi Nakamoto in a three-part blog post series. Possibly exhausted by peoples' previous attempts to do the same, and having noticed several significant inconsistencies in the person's writing, it didn't take long for Bitcoin Twitter to react and call Faketoshi's claims into question. Further reading: How the NSA Identified Satoshi Nakamoto (2017); Bizarre New Theories Emerge About Bitcoin Creator Satoshi Nakamoto (2019); The CIA 'Can Neither Confirm Nor Deny' It Has Documents on Satoshi Nakamoto (2018); Craig Wright Claims He's Satoshi Nakamoto, the Creator Of Bitcoin (2016); Former Bitcoin Developer Shares Early Satoshi Nakamoto Emails (2017); He Says He Invented Bitcoin and Is Suing Those Who Doubt Him (2019); Elon Musk Says He Is Not Bitcoin's Satoshi Nakamoto (2017); Satoshi Nakamoto Found? Not So Fast (2014); Bitcoin Releases Version 0.3 (2010).

Read more of this story at Slashdot.

08:04

Dry patch? Have you considered peppering your flirts with emojis? [The Register]

Research suggests cutesy comms aid can get you laid

Had much, you know, 👌👈 recently? Perhaps you need to ⬆️ your emoji 🙃 game 🎮 as new research 👨‍🎓 has linked ⛓ using the cutesy online comms aid with going on more dates 💑 and getting laid 💦🍆.…

08:00

Fearing Data Privacy Issues, Google Cuts Some Android Phone Data For Wireless Carriers [Slashdot]

Alphabet's Google has shut down a service it provided to wireless carriers globally that showed them weak spots in their network coverage, Reuters reported Monday, citing people familiar with the matter, because of Google's concerns that sharing data from users of its Android phone system might attract the scrutiny of users and regulators. From the report: The withdrawal of the service, which has not been previously reported, has disappointed wireless carriers that used the data as part of their decision-making process on where to extend or upgrade their coverage. Even though the data were anonymous and the sharing of it has become commonplace, Google's move illustrates how concerned the company has become about drawing attention amid a heightened focus in much of the world on data privacy. Google's Mobile Network Insights service, which had launched in March 2017, was essentially a map showing carriers the signal strengths and connection speeds their networks were delivering in each area. The service was provided free to carriers and vendors that helped them manage operations. The data came from devices running Google's Android operating system, which is on about 75% of the world's smartphones, making it a valuable resource for the industry. [...] Nevertheless, Google shut down the service in April due to concerns about data privacy, four people with direct knowledge of the matter told Reuters. Some of them said secondary reasons likely included challenges ensuring data quality and connectivity upgrades among carriers being slow to materialize.

Read more of this story at Slashdot.

07:21

Qt 6 Will Bring Improvements To The Toolkit's Python Support [Phoronix]

Adding to the interesting objectives for Qt 6 are further enhancements to "Qt for Python", improving the programming language's support for this tool-kit...

07:03

Teen TalkTalk hacker ordered to pay £400k after hijacking popular Instagram account [The Register]

Sanitised browser history sparked another investigation

One of the crew who hacked TalkTalk has been ordered to hand over £400,000 after seizing control of a high-profile Instagram account following a hack on Aussie telco Telstra.…

06:59

Intel Icelake Thunderbolt Support Still Being Squared Away For Linux - Hopefully For 5.4 [Phoronix]

Intel Icelake laptops will soon be hitting store shelves and a vast majority of the Linux support has been squared away for many months. Unfortunately one bit still not mainlined is the Thunderbolt support...

06:05

Microsoft Notepad: If it ain't broke, shove it in the Store, then break it? [The Register]

For the love of Windows, please leave that poor text editor alone

Roundup  It's the summer holidays. A good time to do things while nobody's watching. Except The Register, of course. Aside from sneaking Notepad into the Windows Store, last week Microsoft gave Insiders a new 2020 Windows 10 build, added features back into Skype, rounded out Azure's persistent disk storage and prepared a Typescript update.…

05:09

A POWER'ful Announcement Is Expected Tomorrow Changing The Open-Source Landscape [Phoronix]

For those interested in IBM's POWER architecture and/or open-source hardware prospects, an industry-shaking announcement is expected to happen Tuesday morning...

05:05

So your Google Play Publisher account has been terminated – of course you would want to know why exactly [The Register]

Is the platform's abuse policy unfair to genuine developers?

Developer Patrick Godeau has claimed his business is under threat after his Google Play Publisher account was terminated without a specific reason given.…

04:58

System76 Still Aiming To Be The Apple Of The Linux Space With Software & Hardware [Phoronix]

System76 continues doing much more work on software these days as well as expanding their own hardware manufacturing capabilities. This is much more than they did a decade or even several years ago when they were just selling PCs/laptops pre-loaded with Ubuntu. As summed up by System76 founder and CEO, Carl Richell, their end game is much more Apple-esque...

04:34

The US Army Wants To Microwave Drones in Midair [Slashdot]

"The U.S. Army, as part of a broad counter-unmanned aerial systems strategy, is pushing forward with the U.S. Air Force to develop a high-powered microwave weapon," reports Popular Mechanics: Microwave radiation can disrupt or destroy electronic equipment exposed to it, "cooking" internal circuits much in the same way a fork or other metal objects placed in a microwave oven will cause the oven's electronics to melt down. Here's footage of a Raytheon HPM system tested at Fort Sill in 2018. The Pentagon has researched high powered microwave weapons for years, but the threat of drone swarms may have presented it with the perfect threat. The military is preparing for the eventuality of facing swarms of suicide drones on the battlefield, each carrying an explosive payload or prepared to make a suicide attack. Current anti-drone weapons include jammers, shotguns, nets, and even birds, but many of these weapons are only effective against one or a small number of drones at once, and not the dozens or more drones envisioned in the worst drone swarm scenarios.... Microwave radiation doesn't care about rain and other inclement weather, it doesn't rely on individual shots of ammunition, and as long as the electrical generator powering it is on, it will continue to "fire"... The weapon's broad firing arc means it could take out many drones at once, defeating enemy drone swarms. The joint Army/Air Force microwave weapon prototype "should be operational by 2022."

Read more of this story at Slashdot.

04:28

NetBSD Sees Its First Wayland Application Running [Phoronix]

Wayland support is inching ahead on NetBSD for this secure, modern, next-generation successor to the X.Org Server...

04:08

KNOB turns up the heat on Bluetooth encryption, hotels leak guest info, city hands $1m to crook, and much, much more [The Register]

Spec design flaw stiffs security of gizmos

Roundup  Let's run through all the bits and bytes of security news beyond what we've already covered. Also, don't forget our articles from this year's Black Hat, DEF CON, and BSides Las Vegas conferences in the American desert.…

03:04

iFrame clickjacking countermeasures appear in Chrome source code. And it only took *checks calendar* three years [The Register]

After inaction, technical changes promise better fraud defense

Three years ago, Google software engineer Ali Juma proposed that Chrome should be modified to ignore recently moved iframe elements on web pages as a defense against clickjacking.…

02:11

Subcontractor's track record under spotlight as London Mayoral e-counting costs spiral [The Register]

Bill approaching £9m compared to £4.1m in last procurement process

Concerns have been raised over a key supplier of an e-counting system for the London Mayoral elections in 2020.…

02:00

Command line quick tips: Searching with grep [Fedora Magazine]

If you use your Fedora system for more than just browsing the web, you have probably needed to search for text in your files. For instance, you might be a developer that can’t remember where you left some code snippet. Or you might be looking for a setting stored in your system configuration files. Whatever the reason, there are plenty of ways to search for text on your Fedora system. This article will show you how, including using the built-in utility grep.

Introducing grep

The grep utility allows you to search for text, or more specifically text patterns, on your file system. The name grep comes from global regular expression print. Yikes, what a mouthful! This is because a regular expression (or regex) is a way of defining text patterns.

The grep utility lets you find and print out matches on these patterns — thus the name. It’s a powerful system, and you can even find it in modern code editors like Visual Studio Code or Atom.

Regular expressions

Harnessing all the power of regular expressions is a topic bigger than this article, for sure. The simplest kind of regex can be just a word, or a portion of a word. That pattern is simply “the following characters, in the same order.” The pattern is searched line by line. For example:

  • pciutil – matches any time the 7 characters pciutil appear together — including pciutil, pciutils, pciutil123, and foopciutil.
  • ^pciutil – matches any time the 7 characters pciutil appear together immediately at the beginning of a line (that’s what the ^ stands for)
  • pciutil$ – matches any time the 7 characters pciutil appear together immediately before the end of a line (that’s what the $ stands for)
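The anchors above are easy to see in action against a throwaway file. The file name and contents below are invented purely for illustration:

```shell
# Build a small sample file to search against (hypothetical contents).
printf 'pciutils\nfoopciutil\npciutil\n' > /tmp/grep-demo.txt

# Plain pattern: matches all three lines.
grep 'pciutil' /tmp/grep-demo.txt

# ^ anchor: only lines that start with the pattern (drops "foopciutil").
grep '^pciutil' /tmp/grep-demo.txt

# $ anchor: only lines that end with the pattern (drops "pciutils").
grep 'pciutil$' /tmp/grep-demo.txt
```

The first command prints all three lines, the second skips foopciutil, and the third skips pciutils.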

More complicated expressions are also possible. Special characters are used in a regex as wildcards, or to change the way the regex works. If you want to match on one of these characters, use a \ (backslash) before the character.

For instance, the . (period or full stop) is a wildcard that matches any single character. If you use it in the expression pci.til, it matches pciutil, pci4til, or pci!til, but does not match pcitil. There must be a character to match the . in the regular expression.

The ? is a marker in a regex that marks the previous element as optional. So if you built on the previous example, the expression pci.?til would also match on pcitil because there need not be a character between i and t for a valid match.

The + and * are markers that stand for repetition. While + stands for one or more of the previous element, * stands for zero or more. So the regex pci.+til would match any of these: pciutil, pci4til, pci!til, pciuuuuuutil, pci423til. However, it wouldn’t match pcitil — but the regex pci.*til would.
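One wrinkle worth knowing before trying these: by default grep uses basic regular expressions, where ? and + must be escaped as \? and \+; passing -E switches to extended regular expressions, where they work unescaped as described above. A quick sketch, using a made-up sample file:

```shell
# Hypothetical sample file covering the cases discussed above.
printf 'pciutil\npci4til\npcitil\npciuuuuuutil\n' > /tmp/regex-demo.txt

# .? (zero or one character between "pci" and "til"):
# matches pciutil, pci4til, and pcitil, but not pciuuuuuutil.
grep -E 'pci.?til' /tmp/regex-demo.txt

# .+ (one or more characters): everything except pcitil.
grep -E 'pci.+til' /tmp/regex-demo.txt

# .* (zero or more characters): all four lines.
grep -E 'pci.*til' /tmp/regex-demo.txt
```

Without -E, the equivalent basic-regex patterns would be pci.\?til and pci.\+til.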

Examples of grep

Now that you know a little about regex, let’s put it to work. Imagine that you’re trying to find a configuration file that mentions a user account jpublic. You tried a bunch of files already, but none were the correct one, and you’re sure it’s there. So, try searching the /etc folder (using sudo because some subfolders are not readable outside the root account):

$ sudo grep -r jpublic /etc/

The -r switch searches the folder recursively. The utility prints a list of matching files, and the line where the hit occurred. In most modern terminal environments, the hit is color highlighted for better readability.

Imagine you have a much larger selection of files in /home/shared and you need to establish which ones mention the name MacNulty. However, you’re not sure whether the capitalization will be consistent, and you’re just looking for names of files, not the context. Also, you believe someone may have misspelled the name as McNulty in some places.

Use the -l switch to output only filenames with a match, an escaped ? marker (\?) to make the a in the name optional, and -i to make the search case-insensitive:

$ sudo grep -irl 'ma\?cnulty' /home/shared

This command will match on strings like Macnulty, McNulty, Mcnulty, and macNulty with no problem. You’ll get a simple list of filenames where the match was found in the contents.
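You can reproduce that search without touching /home/shared by building a scratch directory; the paths and file contents here are invented for the demo:

```shell
# Scratch directory standing in for /home/shared (hypothetical files).
mkdir -p /tmp/shared-demo
printf 'Report prepared by Macnulty\n' > /tmp/shared-demo/one.txt
printf 'Ask McNulty for the details\n' > /tmp/shared-demo/two.txt
printf 'Nothing relevant here\n' > /tmp/shared-demo/three.txt

# -i ignores case, -r recurses, -l prints only matching filenames;
# \? makes the preceding "a" optional in a basic regular expression.
grep -irl 'ma\?cnulty' /tmp/shared-demo
```

Only one.txt and two.txt are listed; three.txt contains no variant of the name, so it never appears in the output.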

These are only the simplest ways to use grep and regular expressions. You can learn a lot more about both using the info grep command.

But wait, there’s more…

The grep command is venerable but in some situations may not be as efficient as newer search utilities. For instance, the ripgrep utility is engineered to be a fast search utility that can take the place of grep. We covered ripgrep as part of an article on Rust and Rust applications previously in the Magazine.

It’s important to note that ripgrep has its own command line switches and syntax. For example, it has simple switches to print only filename matches, invert searches, and many other useful functions. It can also ignore based on .rgignore files placed in any subdirectories. (It’s also noteworthy that the -r switch is used differently for ripgrep, because it is automatically recursive.)

To install, use this command:

$ sudo dnf install ripgrep

To explore the options, use the manual page (man rg). You’ll find that many, but not all, options are the same as grep.

Have fun searching!


01:34

PayPal Builds 'Zoid' JavaScript Library To 'Make IFrames Cool Again' [Slashdot]

"Earlier this year I gave a talk at FullStack conference in London about making iFrames cool again," writes a lead engineer at PayPal. In a nutshell: iframes let you build user experiences into embeddable 'cross-domain components', which let users interact with other sites without being redirected. There are a metric ton of awesome uses for that other than tracking and advertizing. Nothing else comes close for this purpose; and as a result, I feel we're not using iframes to their full potential. There are big problems, though... My talk went into how at PayPal, we built Zoid to solve some of the major problems with iframes and popups: - Pre-render to avoid the perception of slow rendering - Automatically resize frames to fit child content - Pass down any kind of data and functions/callbacks as props (just like React), and avoid the nightmare of cross-domain messaging between windows. - Make iframes and popups feel like first class (cross-domain) components. Zoid goes a long way. But there are certain problems a mere javascript library can not solve. This is my bucket list for browser vendors, to make iframes more of a first class citizen on the web... Because fundamentally: the idea of cross-domain embeddable components is actually pretty useful once you start talking about shareable user experiences, rather than just user-tracking and advertizing which are obviously pills nobody enjoys swallowing. He acknowledges that he "really likes" the work that's been done on Google Chrome's Portals (which he earlier described as "like iframes, but better, and worse.") "I just hope iframes don't get left behind."

Read more of this story at Slashdot.

01:01

It will never be safe to turn off your computer: Prankster harnesses the power of Windows 95 to torment fellow students [The Register]

Screen says 'Data save failed' so it must be true

Who, Me?  The weekend is over and that means another tale of reader misdeeds to kick-start your Monday with our regular column, Who, Me?

00:21

Can Amazon's AI really detect fear? Plus: Fresh deepfake video freaks everyone out again [The Register]

Nvidia is pleased with its latest numbers, and more

Roundup  Our weekly AI roundup is back from a little summer break, and once again covering bits and pieces from the world of machine learning beyond what's already been reported by Team Register.…

Sunday, 18 August

22:34

Massive Ransomware Attack Hits 23 Local Texas Government Offices [Slashdot]

Long-time Slashdot reader StonyCreekBare shared this press release from the Texas Department of Information Resources (DIR), issued August 17, 2019, at approximately 5:00 p.m. central time: On the morning of August 16, 2019, more than 20 entities in Texas reported a ransomware attack. The majority of these entities were smaller local governments... At this time, the evidence gathered indicates the attacks came from one single threat actor. Investigations into the origin of this attack are ongoing; however, response and recovery are the priority at this time. It appears all entities that were actually or potentially impacted have been identified and notified. Twenty-three entities have been confirmed as impacted. Responders are actively working with these entities to bring their systems back online. The State of Texas systems and networks have not been impacted.

Read more of this story at Slashdot.

19:34

A New Idea For Fighting Rising Sea Levels: Iceberg-Making Submarines [Slashdot]

To address the effects of global warming, a team of designers "propose building ice-making submarines that would ply polar waters and pop out icebergs to replace melting floes," reports NBC News: "Sea level rise due to melting ice should not only be responded [to] with defensive solutions," the designers of the submersible iceberg factory said in an animated video describing the vessel, which took second place in a recent design competition held by the Association of Siamese Architects. The video shows the proposed submarine dipping slowly beneath the ocean surface to allow seawater to fill its large hexagonal well. When the vessel surfaces, an onboard desalination system removes the salt from the water and a "giant freezing machine" and chilly ambient temperatures freeze the fresh water to create the six-sided bergs. These float away when the vessel resubmerges and starts the process all over again. A fleet of the ice-making subs, operating continuously, could create enough of the 25-meter-wide "ice babies" to make a larger ice sheet, according to the designers. Faris Rajak Kotahatuhaha, an architect in Jakarta and the leader of the project, said he sees the design as a complement to ongoing efforts to curb emissions. "Experts praised the designers' vision but cast doubt on the project's feasibility."

Read more of this story at Slashdot.

17:45

Stack Overflow Touts New Programming Solutions Tool That Mines Crowd Knowledge [Slashdot]

Stack Overflow shares a new tool from a team of researchers that "takes the description of a programming task as a query and then provides relevant, comprehensive programming solutions containing both code snippets and their succinct explanations" -- the Crowd Knowledge Answer Generator (or CROKAGE): In order to reduce the gap between the queries and solutions, the team trained a word-embedding model with FastText, using millions of Q&A threads from Stack Overflow as the training corpus. CROKAGE also expanded the natural language query (task description) to include unique open source software library and function terms, carefully mined from Stack Overflow. The team of researchers combined four weighted factors to rank the candidate answers... In particular, they collected the programming functions that potentially implement the target programming task (the query), and then promoted the candidate answers containing such functions. They hypothesized that an answer containing a code snippet that uses the relevant functions and is complemented with a succinct explanation is a strong candidate for a solution. To ensure that the written explanation was succinct and valuable, the team made use of natural language processing on the answers, ranking them most relevant by the four weighted factors. They selected programming solutions containing both code snippets and code explanations, unlike earlier studies. The team also discarded trivial sentences from the explanations... The team analyzed the results of 48 programming queries processed by CROKAGE. The results outperformed six baselines, including the state-of-art research tool, BIKER. Furthermore, the team surveyed 29 developers across 24 coding queries. Their responses confirm that CROKAGE produces better results than that of the state-of-art tool in terms of relevance of the suggested code examples, benefit of the code explanations, and the overall solution quality (code + explanation). 
The tool is still being refined, but it's "experimentally available" -- although "It's limited to Java queries for now, but the creators hope to have an expanded version open to the public soon." It will probably be more useful than Stack Roboflow, a site that uses a neural network to synthesize fake Stack Overflow questions.

Read more of this story at Slashdot.

16:39

A Major Cyber Attack Could Be Just As Deadly As Nuclear Weapons [Slashdot]

"As someone who studies cybersecurity and information warfare, I'm concerned that a cyberattack with widespread impact, an intrusion in one area that spreads to others or a combination of lots of smaller attacks, could cause significant damage, including mass injury and death rivaling the death toll of a nuclear weapon," warns an assistant professor of computer science at North Dakota State University: Unlike a nuclear weapon, which would vaporize people within 100 feet and kill almost everyone within a half-mile, the death toll from most cyberattacks would be slower. People might die from a lack of food, power or gas for heat or from car crashes resulting from a corrupted traffic light system. This could happen over a wide area, resulting in mass injury and even deaths... The FBI has even warned that hackers are targeting nuclear facilities. A compromised nuclear facility could result in the discharge of radioactive material, chemicals or even possibly a reactor meltdown. A cyberattack could cause an event similar to the incident in Chernobyl. That explosion, caused by inadvertent error, resulted in 50 deaths and evacuation of 120,000 and has left parts of the region uninhabitable for thousands of years into the future. My concern is not intended to downplay the devastating and immediate effects of a nuclear attack. Rather, it's to point out that some of the international protections against nuclear conflicts don't exist for cyberattacks... Critical systems, like those at public utilities, transportation companies and firms that use hazardous chemicals, need to be much more secure... But all those systems can't be protected without skilled cybersecurity staffs to handle the work. At present, nearly a quarter of all cybersecurity jobs in the US are vacant, with more positions opening up than there are people to fill them. One recruiter has expressed concern that even some of the jobs that are filled are held by people who aren't qualified to do them. 
The solution is more training and education, to teach people the skills they need to do cybersecurity work, and to keep existing workers up to date on the latest threats and defense strategies.

Read more of this story at Slashdot.

15:56

Linux 5.3-rc5 Released Following A Calm Week [Phoronix]

Linus Torvalds just issued the Linux 5.3-rc5 kernel test release as we are now just a few weeks out from the official Linux 5.3 kernel debut...

15:38

XKCD Author Challenges Serena Williams To Attack A Drone [Slashdot]

In just 16 days XKCD author Randall Munroe releases a new book titled How To: Absurd Scientific Advice for Common Real-World Problems. He's just released an excerpt from the chapter "How to Catch a Drone," in which he actually enlisted the assistance of tennis star Serena Williams. An anonymous reader writes: Serena and her husband Alexis just happened to have a DJI Mavic Pro 2 with a broken camera -- and Munroe asked her to try to smash it with tennis balls. "My tentative guess was that a champion player would have an accuracy ratio around 50 when serving, and take 5-7 tries to hit a drone from 40 feet. (Would a tennis ball even knock down a drone? Maybe it would just ricochet off and cause the drone to wobble! I had so many questions.) "Alexis flew the drone over the net and hovered there, while Serena served from the baseline..." His blog has the rest of the story, and Munroe has even illustrated the experiment, promising that the book also contains additional anti-drone strategies, an analysis of other sports projectiles, and "a discussion with a robot ethicist about whether hitting a drone with a tennis ball is wrong."

Read more of this story at Slashdot.

15:04

Why Am I Receiving Unordered Boxes From Amazon? [Slashdot]

It's an unexpected surprise that's been popping up "all over the country," according to the Better Business Bureau. People are receiving boxes of unordered merchandise from Amazon. The companies sending the items, usually foreign third-party sellers, are simply using your address and your Amazon information. Their intention is to make it appear as though you wrote a glowing online review of their merchandise, and that you are a verified buyer of that merchandise. They then post a fake, positive review to improve their products' ratings, which means more sales for them. The payoff is highly profitable from their perspective... The fake online review angle is only one way they benefit...they also are increasing their sales numbers. After all, they aren't really purchasing the items since the payment goes right back to them.... Then there is the "porch pirate" angle. There have been instances where thieves used other people's mailing addresses and accounts, then watched for the delivery of the package so they could steal it from your door before you got it... The fact that someone was able to have the items sent to you as if you purchased them indicates that they probably have some of your Amazon account information. Certainly, they have your name and address and possibly, your phone number and a password. The company either hacked your account themselves or purchased the information from a hacker. The BBB notes that although it's strange to receive boxes of unordered merchandise, "You are allowed to keep it. The Federal Trade Commission says you have a legal right to keep unordered merchandise." "The bigger issue is: What do you do about your information having been obtained by crooks?"

Read more of this story at Slashdot.

14:34

Alexa, Siri, and Google Home Can Be Tricked Into Sending Callers To Scam Phone Numbers [Slashdot]

"Don't ask your smart device to look up a phone number, because it may accidentally point you to a scam," warn the consumer watchdogs at the Better Business Bureau: You need the phone number for a company, so you ask your home's smart device -- such as Google Home, Siri, or Alexa -- to find and dial it for you. But when the company's "representative" answers, the conversation takes a strange turn. This representative has some odd advice! They may insist on your paying by wire transfer or prepaid debit card. In other cases, they may demand remote access to your computer or point you to an unfamiliar website. Turns out that this "representative" isn't from the company at all. Scammers create fake customer service numbers and bump them to the top of search results, often by paying for ads. When Siri, Alexa, or another device does a voice search, the algorithm may accidentally pick a scam number. One recent victim told BBB.org/ScamTracker that she used voice search to find and call customer service for a major airline. She wanted to change her seat on an upcoming flight, but the scammer tried to trick her into paying $400 in pre-paid gift cards by insisting the airline was running a special promotion. In another report, a consumer used Siri to call what he thought was the support number for his printer. Instead, he found himself in a tech support scam. People put their faith in voice assistants, even when they're just parroting the results from search engines, the BBB warns. The end result? "Using voice search to find a number can make it harder to tell a phony listing from the real one."

Read more of this story at Slashdot.

13:34

Should HTTPS Certificates Expire After Just 397 Days? [Slashdot]

Google has made a proposal to the unofficial cert industry group that "would cut the lifespan of SSL certificates from 825 days to 397 days," reports ZDNet. No vote was held on the proposal; however, most browser vendors expressed their support for the new SSL certificate lifespan. On the other side, certificate authorities were not too happy, to say the least. In the last decade and a half, browser makers have chipped away at the lifespan of SSL certificates, cutting it down from eight years to five, then to three, and then to two. The last change occurred in March 2018, when browser makers tried to reduce SSL certificate lifespans from three years to one, but compromised for two years after pushback from certificate authorities. Now, barely two years after the last change, certificate authorities feel bullied by browser makers into accepting their original plan, regardless of the 2018 vote... This fight between CAs and browser makers has been happening in the shadows for years. As HashedOut, a blog dedicated to HTTPS-related news, points out, this proposal is much more about proving who controls the HTTPS landscape than anything else. "If the CAs vote this measure down, there's a chance the browsers could act unilaterally and just force the change anyway," HashedOut said. "That's not without precedent, but it's also never happened on an issue that is traditionally as collegial as this. If it does, it becomes fair to ask what the point of the CA/B Forum even is. Because at that point the browsers would basically be ruling by decree and the entire exercise would just be a farce." Security researcher Scott Helme "claims that this process is broken and that bad SSL certificates continue to live on for years after being misissued and revoked -- hence the reason he argued way back in early 2018 that a shorter lifespan for SSL certificates would fix this problem because bad SSL certs would be phased out faster."
But the article also notes that Timothy Hollebeek, DigiCert's representative at the CA/B Forum, argues that the proposed change "has absolutely no effect on malicious websites, which operate for very short time periods, from a few days to a week or two at most. After that, the domain has been added to various blacklists, and the attacker moves on to a new domain and acquires new certificates."
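The validity window under debate is easy to inspect with OpenSSL. A minimal sketch, using a throwaway self-signed certificate (the file paths and CN are placeholders for illustration):

```shell
# Generate a throwaway self-signed certificate valid for 397 days
# (the lifespan Google proposed), then print its validity window.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem \
    -days 397 -subj "/CN=demo.test"
openssl x509 -in /tmp/demo_cert.pem -noout -dates

# The same check against a live site's certificate (needs network access):
#   openssl s_client -connect example.com:443 -servername example.com </dev/null \
#     | openssl x509 -noout -dates
```

The `notBefore`/`notAfter` pair printed by the second command is exactly the lifespan browsers and CAs are arguing over.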

Read more of this story at Slashdot.

12:46

AMD Ryzen 5 3600X Linux Performance [Phoronix]

Now that the new AMD Ryzen 3000 series are running great with the latest Linux distributions following prominent motherboard vendors issuing BIOS updates that correct the "RdRand" issue, we're moving on with looking at the performance of the rest of the Ryzen 3000 series line-up while having freshly re-tested the processors under Ubuntu 19.04. Up for exploration today is the AMD Ryzen 5 3600X, the six-core / 12-thread processor retailing for about $250 USD.

12:34

Wells Fargo's Computer Kept Charging 'Overdrawn' Fees On Supposedly Closed Accounts [Slashdot]

The New York Times explains a new issue by describing what happened when Xavier Einaudi tried to close his Wells Fargo checking account. For weeks after the date the bank said the accounts would be closed, it kept some of them active. Payments to his insurer, to Google for online advertising and to a provider of project management software were paid out of the empty accounts in July. Each time, the bank charged Einaudi a $35 overdraft fee... By the middle of July, he owed the bank nearly $1,500. "I don't even know what happened," he said. Current and former bank employees said Einaudi was charged because of the way Wells Fargo's computer system handles closed accounts: An account the customer believes to be closed can stay open if it has a balance, even one below zero. And each time a transaction is processed for an overdrawn account, Wells Fargo tacks on a fee. The problem has gone unaddressed by the bank despite complaints from customers and employees, including one in the bank's debt-collection department who grew concerned after taking in an estimated $100,000 in overdraft fees over eight months... Most banks program their systems to stop honoring transactions on the specified date, but Wells Fargo allows accounts to remain open for two more months, according to current and former employees. Customers usually learn what happened only after their overdrawn accounts are sent to Wells Fargo's collections department. If the customers do not pay the overdraft fees, they are reported to a national database like Early Warning Services, which compiles names of delinquent bank customers. That often means a customer cannot open a new bank account anywhere, and getting removed from the lists can take hours' worth of phone calls.

Read more of this story at Slashdot.

11:34

Tech Companies Challenge 'Open Office' Trend With Pods [Slashdot]

Open floor plans create "a minefield of distractions," writes CNBC. But now they're being countered by a new trend that one office interior company's owner says "started with tech companies and the need for privacy." They're called "office pods..." They provide a quiet space for employees to conduct important phone calls, focus on their work or take a quick break. "We are seeing a large trend, a shift to having independent, self-contained enclosures," said Caitlin Turner, a designer at the global design and urban planning firm HOK. She said the growing demand for pods is a direct result of employees expressing their need for privacy... Prices can range anywhere from $3,495 for a single-user pod from ROOM to $15,995 for an executive suite from Zenbooth. Pod manufacturers are expanding rapidly. In addition to Zenbooth and ROOM, there are TalkBox, PoppinPod, Spaceworx and Framery. Pod sizes also vary to include individual booths designed for a single user, medium-sized pods for small gatherings of two to three people and larger executive spaces that could host up to four to six people. Sam Johnson, the founder of Zenbooth, said the idea for pods came from his experience working in the tech industry, where he quickly became disillusioned by the open floor plan. It was an "unsolved problem" that prompted him to quit his job and found Zenbooth, a pod company based in the Bay Area, in 2016. He said the company is a "privacy solutions provider" that offers "psychological safety" via a peaceful space to work and think. "We've had customers say to us that we literally couldn't do our job without your product," Johnson said. The company now counts Samsung, Intel, Capital One and Pandora, among others, as clients, as it works in tech hubs including Boston, the Bay Area, New York and Seattle. Its biggest customer, Lyft, has 35 to 40 booths at its facilities.
"In 2014, 70% of companies had an open floor plan, according to the International Facility Management Association," the article points out -- though one Queensland University of Technology study found 90% of employees in open floor plan offices actually experienced more stress and conflict, along with higher blood pressure and increased turnover.

Read more of this story at Slashdot.

10:34

Slackware, the Longest Active Linux Distro, Finally Has a Patreon Page [Slashdot]

"Slackware is the longest active Linux distribution project, founded in 1993," writes TheBAFH (Slashdot reader #68,624). "Today there are many Linux distributions available, but I've remained dedicated to this project as I believe it still holds an important place in the Linux ecosystem," writes Patrick J. Volkerding on a new Patreon page. He adds that Slackware's users "know that Slackware can be trusted not to constantly change the way things work, so that your investment in learning Slackware lasts longer than it would with a system that's a moving target... Your support is greatly appreciated, and will make it possible for me to continue to maintain this project." TheBAFH writes: The authenticity of the Patreon page has been confirmed by Mr. Volkerding in a post in the Slackware forum of LinuxQuestions.org. "I was going to wait to announce it until I had a few more planned updates done in -current that would be getting things closer to an initial 15.0 beta release, but since it's been spotted in the wild I'll confirm it." Slashdot also emailed Patrick J. Volkerding at Slackware.com last summer and confirmed that that is indeed the account that he's posting from on LinuxQuestions. At the time, he was still trying to find the time to get a Patreon page set up. "I've been trying to catch up on nearly a decade of neglecting everything other than Slackware, but I'm at least getting more caught up."

Read more of this story at Slashdot.

10:02

Intel Tries Again To Auto Enable GuC/HuC Functionality For Their Linux Graphics Driver [Phoronix]

Intel previously tried auto-enabling GuC and HuC functionality within their Linux kernel graphics driver but ended up reverting the support since the driver didn't gracefully handle the scenarios of missing/corrupt firmware files. The driver should now be more robust in such situations so they will try again for turning on the automatic behavior, possibly for the upcoming Linux 5.4 cycle...

09:50

Saturday Morning Breakfast Cereal - Bat [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Do you think it's ever possible to truly know what it's like to be Thomas Nagel?


Today's News:

09:34

Google Open-Sources Live Transcribe's Speech Engine [Slashdot]

Friday Google open-sourced "the speech engine that powers its Android speech recognition transcription tool Live Transcribe," reports Venture Beat: The company hopes doing so will let any developer deliver captions for long-form conversations. The source code is available now on GitHub. Google released Live Transcribe in February. The tool uses machine learning algorithms to turn audio into real-time captions. Unlike Android's upcoming Live Caption feature, Live Transcribe is a full-screen experience, uses your smartphone's microphone (or an external microphone), and relies on the Google Cloud Speech API. Live Transcribe can caption real-time spoken words in over 70 languages and dialects. You can also type back into it — Live Transcribe is really a communication tool. The other main difference: Live Transcribe is available on 1.8 billion Android devices. (When Live Caption arrives later this year, it will only work on select Android Q devices.)

Read more of this story at Slashdot.

08:34

Researchers Find New State of Matter, Claim It Could Aid Quantum Computing and Data Storage [Slashdot]

"A team of physicists has uncovered a new state of matter -- a breakthrough that offers promise for increasing storage capabilities in electronic devices and enhancing quantum computing," according to an announcement from NYU: "Our research has succeeded in revealing experimental evidence for a new state of matter -- topological superconductivity," says Javad Shabani, an assistant professor of physics at New York University. "This new topological state can be manipulated in ways that could both speed calculation in quantum computing and boost storage...." In their research, Shabani and his colleagues analyzed a transition of quantum state from its conventional state to a new topological state, measuring the energy barrier between these states.... "The new discovery of topological superconductivity in a two-dimensional platform paves the way for building scalable topological qubits to not only store quantum information, but also to manipulate the quantum states that are free of error," observes Shabani. The research was funded, in part, by a grant from the U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA).

Read more of this story at Slashdot.

05:30

Qt's Development Branch To Begin Forming Qt 6 [Phoronix]

Following the feature freeze and code branching for Qt 5.14, the Qt "Dev" branch will likely be shifting immediately to Qt 6 development. A Qt 5.15 release is still expected to happen before Qt 6.0, but that 5.15 milestone will likely just be a polished release derived from Qt 5.14...

05:22

Warfork Letting Warsow Live On Under Steam [Phoronix]

Going back a decade, one of the interesting open-source FPS games of its time was Warsow. Development on Warsow has seemingly been tumultuous over the past few years (edit: though the core developer has recently released a new beta) for this Qfusion (Quake 2 code base) engine-powered game that started in 2005, but now there is Warfork, a fork of Warsow that is being developed and also available via Steam...

05:06

KDE Usability & Productivity Initiative Coming To An End [Phoronix]

The KDE Usability and Productivity Initiative, an effort to solve various problems in the KDE software stack to make it easier to use for more individuals and more efficient, will be coming to an end. But other KDE goals are being envisioned, and the usability and productivity elements will continue to be worked on outside of this initiative...

Saturday, 17 August

22:12

Knoppix 8.6 Released - This Original Linux Live Distro Now Based On Debian Buster [Phoronix]

Knoppix 8.6 is out this weekend as the newest version of this original Linux distribution, one of the first to support Live CD/DVD booting...

19:08

Vulkan 1.1.120 Released As The Newest Maintenance Release [Phoronix]

Vulkan 1.1.120 is out as the newest weekly update to the Vulkan graphics API...

14:52

Linux 5.3 Kernel Yielding The Best Performance Yet For AMD EPYC "Rome" CPU Performance [Phoronix]

Among many different Linux/open-source benchmarks being worked on for the AMD EPYC "Rome" processors now that our initial launch benchmarks are out of the way are Linux distribution comparisons, checking out the BSD compatibility, and more. Some tests I wrapped up this weekend were seeing how recent Linux kernel releases perform on the AMD EPYC 7742 64-core / 128-thread processors...

08:30

System76 Unveils Their Firmware Manager Project For Graphically Updating Firmware [Phoronix]

While most major hardware vendors have been adopting LVFS+Fwupd for firmware updating on Linux, Linux PC vendor System76 has notably been absent from the party for a variety of reasons. Today they announced their new Firmware Manager project that bridges the gap between their lack of LVFS support and their own hosted firmware service...

07:24

Git 2.23 Brings New Switch & Restore Sub-Commands [Phoronix]

Git 2.23 was released on Friday with more than 500 changes on top of the previous release...
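The headline feature splits the overloaded git checkout into two purpose-built commands. A quick sketch of both in a throwaway repository (names and paths are just for the demo):

```shell
# Try the new Git 2.23 sub-commands in a scratch repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.test && git config user.name Demo
echo hello > file.txt
git add file.txt && git commit -qm initial

# 'git switch' takes over the branch-changing half of 'git checkout':
git switch -c feature        # create and switch to a new branch
git switch -                 # jump back to the previous branch

# 'git restore' takes over the file-restoring half:
echo scribble >> file.txt
git restore file.txt         # discard the working-tree change
```

Both commands are still marked experimental in 2.23, and `git checkout` continues to work as before.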

07:18

Wine Staging 4.14 Carries 841 Patches Atop Upstream Wine [Phoronix]

Re-based against yesterday's Wine 4.14 release, Wine-Staging 4.14 is now available with nearly 850 extra patches...

06:34

Oracle Continues Working On eBPF Support For GCC 10 [Phoronix]

Back in May we wrote about Oracle's initial plans for introducing an eBPF back-end to GCC 10 to allow this GNU compiler to target the general-purpose in-kernel virtual machine. Up to this point LLVM Clang has been the compiler of choice for eBPF, but those days are numbered with Oracle on Friday pushing out the newest GCC patches...

06:15

Saturday Morning Breakfast Cereal - Google [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Later, Amazon conquered Heaven and we got The Cloud.


Today's News:

06:08

Linux 5.4 To Expose What's Keeping The System Awake Via Sysfs [Phoronix]

The next Linux kernel version will expose the real-time sources of what's keeping the system awake via sysfs; previously this information was only available via DebugFS...

05:51

Unigine 2.9 Further Enhances Its Stunning Visuals [Phoronix]

It's a pity there doesn't seem to be any new adoption of Unigine as a game engine, but this visually impressive platform does continue seeing much success in the area of industrial simulations, professional VR platforms, and related areas. With Unigine 2.9 this Linux-friendly graphics engine is even more stunning...

Friday, 16 August

19:11

Wine 4.14 Released With The Latest Bits For Running Windows Games/Programs On Linux [Phoronix]

Wine 4.14 was released earlier today as the newest bi-weekly point release for running Windows games and applications on Linux and other operating systems...

15:58

Overstock's share price has plummeted. Is it Trump's trade war? Bad results? Nope, its CEO has gone bonkers... [The Register]

Just what is Patrick Byrne's role in the Deep State? He's here to tell you

Comment  How much of a company's value is tied up in its leadership?…

14:57

Chrome add-on warns netizens when they use a leaked password. Sometimes, they even bother to change it [The Register]

Alerted to exposed credentials, users do something about it roughly a quarter of the time

Between February and March this year, after Google released a Chrome extension called Password Checkup to check whether people's username and password combinations had been stolen and leaked from website databases, computer scientists at the biz and Stanford University gathered anonymous telemetry from 670,000 people who installed the add-on.…

14:09

NSA asks Congress to permanently reauthorize spying program that was so shambolic, the snoops had shut it down [The Register]

You never know, we might figure out how not to screw up in future

Analysis  In the clearest possible sign that the US intelligence services live within their own political bubble, the director of national intelligence has asked Congress to reauthorize a spying program that the NSA itself decided to shut down after it repeatedly – and illegally – gathered the call records of millions of innocent Americans.…

14:00

Dropbox would rather write code twice than try to make C++ work on both iOS and Android [The Register]

Write once, run anywhere? You must be joking

Dropbox has abandoned a longstanding technical strategy of sharing C++ code between its applications for iOS and Android, saying the overhead of writing code twice is less than the cost of making code-sharing work.…

13:00

Microsoft Surface users baffled after investing in kit that throttles itself to the point of passing out [The Register]

400MHz ought to be enough for anyone?

An intermittent but longstanding issue where Microsoft Surface Pro 6 and Surface Book 2 devices run super slow continues to frustrate users.…

12:45

Top tip: Don't upload your confidential biz files to free malware-scanning websites – everything is public [The Register]

Sandbox services are bursting with sensitive info from unwitting companies

Companies are inadvertently leaving confidential files on the internet for anyone to download – after uploading the documents to malware-scanning websites that make everything public.…

12:00

Gone in a flash: Oracle lays off hundreds as the biz formerly known as Pillar Data is shuttered [The Register]

The conference call equivalent of being taken round the back and...

Oracle is shuttering its flash storage division and laying off at least 300 employees, according to various sources.…

11:00

Alibaba: There's a trade war going on? Could've fooled us – just check out these swollen digits [The Register]

Cloud biz still dwarfed by retail but everything's up

Alibaba, China's nearest equivalent to Amazon, is weathering the "uncertain economic" landscape caused in part by the "trade war" between the US and Middle Kingdom governments.…

10:00

Data cops order Ireland to delete 3.2m records after ID card wheeze ruled to be 'unlawful' [The Register]

Splash one for GDPR

Ireland's Data Protection Commission (DPC) has ordered the country to delete 3.2 million people's personal data after ruling that its national ID card scheme was "unlawful from a data-processing point of view".…

09:30

And you thought the cops were bad... Civil rights group warns of facial recog 'epidemic' across UK private sites [The Register]

Shopping centres, museums and conference centres all found to be using tech

Facial recognition is being extensively deployed on privately owned sites across the UK, according to an investigation by civil liberties group Big Brother Watch.…

09:00

UK.gov opens £250k competition to tackle first-world problem of crap conference Wi-Fi [The Register]

Forget Vegas or Barcelona. Be 'gigabit-capable' in Blighty

Fiddling around with crap conference Wi-Fi is an occupational hazard for attendees. But today the UK government has dug deep to produce the princely sum of £250k to tackle this national problem.…

08:00

Apple fires legal salvo at Corellium claiming the virtual iPhone flinger is infringing copyright [The Register]

Good-faith security research tool or help for hackers? Both?

Apple has filed a copyright infringement complaint against Corellium, which provides virtual machines running iOS as a service to developers and security researchers.…

07:00

Yorkshire public sector procurement body YPO opens £400m framework for data centres, cloud hosting and security [The Register]

But how much will go to AWS?

Public sector procurement body Yorkshire Purchasing Organisation (YPO) has opened its £400m framework for data centres, cloud hosting and data security.…

06:54

Linux 5.4 Set To Remove Intel XScale IOP33X/IOP13XX CPU Support [Phoronix]

Linux 5.4 is set to remove the Intel IOP33X and IOP13XX series of processors that are part of the company's former XScale product line for ARM-based CPUs...

06:40

Saturday Morning Breakfast Cereal - Back [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I mean, statistically, shouldn't this be the most common outcome?


Today's News:

06:00

Fancy a career exposing cloud data leaks? Great news, companies are still largely clueless [The Register]

Unit 42 crew tours the cloud security hellscape, finds admins have learned nothing

Anyone hoping to halt the flood of data leaks stemming from cloud services got bad news this week when Palo Alto's Unit 42 found little sign companies were improving their security practices.…

05:34

Radeon Software for Linux 19.30 Updated With Ubuntu 18.04.3 LTS Support [Phoronix]

In addition to AMD releasing the Radeon Pro Software for Enterprise 19.Q3 Linux driver, they also quietly released a new Radeon Software Linux driver release for consumer GPUs...

05:12

Intel Volleys Another Batch Of Tiger Lake "Gen 12" Graphics Code [Phoronix]

While it remains to be seen if Tiger Lake will be able to ship on time in 2020 as the Icelake successor, the "Gen 12" Xe Graphics continue to be worked on with the company's open-source Linux graphics driver...

05:03

QEMU 4.1 Released With Many ARM, MIPS & x86 Additions [Phoronix]

QEMU 4.1 is now out as one of the important pieces to the open-source Linux virtualization stack...

05:00

UK.gov has £12m to help kick-start quantum techs that could be 'adopted at scale' – which is pretty niche, if we're honest [The Register]

Brave investors would have to match awards four times over

The UK government yesterday waved around some pocket change aimed at making forever-nearly-here quantum technologies a reality.…

04:33

Etnaviv Is Packing Code For An Exciting Linux 5.4 Cycle [Phoronix]

While Freedreno and Panfrost have been steaming ahead when it comes to open-source, reverse-engineered graphics for Arm SoCs, the Etnaviv project for targeting Vivante graphics hasn't had too much to report on recently. Fortunately, that's changing as coming up for the Linux 5.4 cycle they have a lot of new code to introduce...

04:24

Kdevops Aims To Assist In Linux Kernel Testing [Phoronix]

Luis Chamberlain has announced the first release of Kdevops as a Linux kernel development "DevOps" framework...

04:00

Astroboffins have spied the largest star that has gone supernova and it's breaking all the rules [The Register]

Back to the drawing board thanks to SN2016iet

Astronomers have stumbled across the strangest supernova left over from the death of a humongous star 200 times as massive as the Sun. It’s the largest known star to have ended its life in a supernova explosion yet.…

03:00

Criminal mastermind signed name as 'Thief' on receipts after buying stuff with stolen card [The Register]

Hello, can I speak to Rob please? Second name Ber

Criminologists have long known petty crooks to be dumber than law-abiding citizens. Throw in some Dunning-Kruger and you have the perfect storm of a moron who thinks they're a criminal mastermind.…

02:03

Police costs for Gatwick drone fiasco double to nearly £900k – and still no one's been charged [The Register]

Omnishambles just keeps on rolling and you're paying for it

Sussex Police's probe of the infamous London Gatwick airport drone fiasco of Christmas 2018 has doubled in cost to nearly £900,000 – and the bungling force still hasn't arrested the person or persons responsible.…

02:00

Cockpit and the evolution of the Web User Interface [Fedora Magazine]

Over 3 years ago the Fedora Magazine published an article entitled Cockpit: an overview. Since then, the interface has seen some eye-catching changes. Today's Cockpit is cleaner, and the larger fonts make better use of screen real estate.

This article will go over some of the changes made to the UI. It will also explore some of the general tools available in the web interface to simplify those monotonous sysadmin tasks.

Cockpit installation

Cockpit can be installed using the dnf install cockpit command. This provides a minimal setup with the basic tools required to use the interface.

Another option is to install the Headless Management group. This will install additional packages used to extend the usability of Cockpit. It includes extensions for NetworkManager, software packages, disk, and SELinux management.
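For example, both routes can be taken from a shell (the group name below is as it appears in dnf's group listing, a sketch of the two options described above):

```shell
# Minimal install: just the Cockpit web interface.
sudo dnf install -y cockpit

# Fuller install: the Headless Management group pulls in Cockpit plus
# the NetworkManager, package, disk, and SELinux management extensions.
sudo dnf group install -y "Headless Management"
```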

Run the following commands to enable the web service on boot and open the firewall port:

$ sudo systemctl enable --now cockpit.socket
Created symlink /etc/systemd/system/sockets.target.wants/cockpit.socket -> /usr/lib/systemd/system/cockpit.socket

$ sudo firewall-cmd --permanent --add-service cockpit
success
$ sudo firewall-cmd --reload
success

Logging into the web interface

To access the web interface, open your favourite browser and enter the server’s domain name or IP in the address bar followed by the service port (9090). Because Cockpit uses HTTPS, the installation will create a self-signed certificate to encrypt passwords and other sensitive data. You can safely accept this certificate, or request a CA certificate from your sysadmin or a trusted source.

Once the certificate is accepted, the new and improved login screen will appear. Long-time users will notice the username and password fields have been moved to the top. In addition, the white background behind the credential fields immediately grabs the user’s attention.

A feature added to the login screen since the previous article is logging in with sudo privileges — if your account is a member of the wheel group. Check the box beside Reuse my password for privileged tasks to elevate your rights.

Another addition to the login screen is the option to connect to remote servers also running the Cockpit web service. Click Other Options and enter the host name or IP address of the remote machine to manage it from your local browser.

Home view

Right off the bat we get a basic overview of common system information. This includes the make and model of the machine, the operating system, if the system is up-to-date, and more.

Clicking the make/model of the system displays hardware information such as the BIOS/Firmware. It also includes details about the components as seen with lspci.

Clicking on any of the options to the right will display the details of that device. For example, the % of CPU cores option reveals details on how much is used by the user and the kernel. In addition, the Memory & Swap graph displays how much of the system's memory is used, how much is cached, and how much of the swap partition is active. The Disk I/O and Network Traffic graphs are linked to the Storage and Networking sections of Cockpit. These topics will be revisited in an upcoming article that explores the system tools in detail.

Secure Shell Keys and authentication

Because security is a key factor for sysadmins, Cockpit now has the option to view the machine’s MD5 and SHA256 key fingerprints. Clicking the Show fingerprints options reveals the server’s ECDSA, ED25519, and RSA fingerprint keys.

You can also add your own keys by clicking on your username in the top-right corner and selecting Authentication. Click on Add keys to validate the machine on other systems. You can also revoke your privileges in the Cockpit web service by clicking on the X button to the right.
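The fingerprints Cockpit displays can be cross-checked with ssh-keygen. A small demo with a throwaway key (the /etc/ssh paths in the comment are the stock OpenSSH locations):

```shell
# Generate a throwaway ED25519 key and print its SHA256 fingerprint,
# the same format Cockpit shows under "Show fingerprints".
key=$(mktemp -u)
ssh-keygen -q -t ed25519 -N "" -f "$key"
ssh-keygen -lf "$key.pub"

# On the server itself, fingerprint the real host keys instead:
#   for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done
```

Comparing these values before accepting a connection is a simple defense against man-in-the-middle attacks.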

Changing the host name and joining a domain

Changing the host name is a one-click solution from the home page. Click the host name currently displayed, and enter the new name in the Change Host Name box. One of the latest features is the option to provide a Pretty name.
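The same change can be made from a shell with hostnamectl, including the pretty name (the names below are examples):

```shell
# Set the static host name and a human-friendly "pretty" name.
sudo hostnamectl set-hostname server01.example.test
sudo hostnamectl set-hostname --pretty "Server 01 (lab rack)"
hostnamectl status    # shows the static and pretty names together
```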

Another feature added to Cockpit is the ability to connect to a directory server. Click Join a domain and a pop-up will appear requesting the domain address or name, organization unit (optional), and the domain admin's credentials. The Domain Membership group provides all the packages required to join an LDAP server, including FreeIPA and the popular Active Directory.

To opt-out, click on the domain name followed by Leave Domain. A warning will appear explaining the changes that will occur once the system is no longer on the domain. To confirm click the red Leave Domain button.
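On Fedora this kind of enrollment is typically handled by realmd, so the equivalent CLI flow looks roughly like the sketch below (domain name and admin user are placeholders):

```shell
# Discover and join a directory domain (FreeIPA or Active Directory).
realm discover example.test
sudo realm join --user=admin example.test

# Leaving the domain, as with Cockpit's "Leave Domain" button:
sudo realm leave example.test
```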

Configuring NTP and system date and time

Using the command-line and editing config files definitely takes the cake when it comes to maximum tweaking. However, there are times when something more straightforward would suffice. With Cockpit, you have the option to set the system’s date and time manually or automatically using NTP. Once synchronized, the information icon on the right turns from red to blue. The icon will disappear if you manually set the date and time.
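For those times when the command line does suffice, timedatectl covers the same ground (the timezone below is an example):

```shell
# Enable NTP synchronization and set the timezone.
sudo timedatectl set-ntp true
sudo timedatectl set-timezone America/Toronto
timedatectl status    # reports whether the system clock is synchronized
```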

To change the timezone, type the continent and a list of cities will populate beneath.

Shutting down and restarting

You can easily shutdown and restart the server right from home screen in Cockpit. You can also delay the shutdown/reboot and send a message to warn users.

Configuring the performance profile

If the tuned and tuned-utils packages are installed, performance profiles can be changed from the main screen. By default it is set to a recommended profile. However, if the purpose of the server requires more performance, we can change the profile from Cockpit to suit those needs.
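From a shell, the same profiles are driven by tuned-adm; a sketch, noting that the available profile names vary by installation:

```shell
# Install the tuning tools and start the tuned daemon.
sudo dnf install -y tuned tuned-utils
sudo systemctl enable --now tuned

tuned-adm list                                  # show available profiles
sudo tuned-adm profile throughput-performance   # switch profiles
tuned-adm active                                # confirm the active profile
```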

Terminal web console

A Linux sysadmin’s toolbox would be useless without access to a terminal. This allows admins to fine-tune the server beyond what’s available in Cockpit. With the addition of themes, admins can quickly adjust the text and background colours to suit their preference.

Also, if you type exit by mistake, click the Reset button in the top-right corner. This will provide a fresh screen with a flashing cursor.

Adding a remote server and the Dashboard overlay

The Headless Management group includes the Dashboard module (cockpit-dashboard). This provides an overview of the CPU, memory, network, and disk performance in a real-time graph. Remote servers can also be added and managed through the same interface.

For example, to add a remote computer in Dashboard, click the + button. Enter the name or IP address of the server and select the colour of your choice. This helps to differentiate the stats of the servers in the graph. To switch between servers, click on the host name (as seen in the screen-cast below). To remove a server from the list, click the check-mark icon, then click the red trash icon. The example below demonstrates how Cockpit manages a remote machine named server02.local.lan.

Documentation and finding help

As always, the man pages are a great place to find documentation. A simple search on the command line turns up pages pertaining to different aspects of using and configuring the web service.

$ man -k cockpit
cockpit (1)          - Cockpit
cockpit-bridge (1)   - Cockpit Host Bridge
cockpit-desktop (1)  - Cockpit Desktop integration
cockpit-ws (8)       - Cockpit web service
cockpit.conf (5)     - Cockpit configuration file

The Fedora repository also has a package called cockpit-doc. The package’s description explains it best:

The Cockpit Deployment and Developer Guide shows sysadmins how to deploy Cockpit on their machines as well as helps developers who want to embed or extend Cockpit.

For more documentation visit https://cockpit-project.org/external/source/HACKING

Conclusion

This article only touches on some of the main functions available in Cockpit. Managing storage devices, networking, user accounts, and software control will be covered in an upcoming article, along with optional extensions such as the 389 directory service and the cockpit-ostree module used to handle packages in Fedora Silverblue.

The options continue to grow as more users adopt Cockpit. The interface is ideal for admins who want a light-weight interface to control their server(s).

What do you think about Cockpit? Share your experience and ideas in the comments below.

01:04

Security? We've heard of it! But why be a party pooper when there's printing to be done [The Register]

The boss that went rogue and cocked a snook at the corporate policy he wrote

On Call  With the gateway to the weekend upon us, it is time to crack open the On Call files once again to enjoy a tale from one of those brave engineers at the front line of the tech world.…

00:00

Just dying to get into neural networks? Need a start in deep learning? We can workshop it out [The Register]

Join us at MCubed and dive deep into practical, hands-on knowledge

Event  It’s great to hear about the possibilities of machine learning and AI, but to really understand their potential, nothing beats getting your hands dirty under the watchful eye of a bona fide expert.…

Thursday, 15 August

22:54

'Deeply concerned' UK privacy watchdog thrusts probe into King's Cross face-recognizing snoop cam brouhaha [The Register]

ICO wants to know if AI surveillance systems in central London are legal

The UK's privacy watchdog last night launched a probe into the use of facial-recognition technology in the busy King's Cross corner of central London.…

19:26

Oracle Linux 7 Update 7 Released [Phoronix]

Following last week's release of Red Hat Enterprise Linux 7.7, Oracle Linux 7 Update 7 is now available with many of the same changes...

18:59

Apple's WebKit techs declare privacy circumvention to be a security issue [The Register]

Bypass our tracking controls at your unspecified peril, warns maker of minor browser

Apple's WebKit team on Wednesday formalized the company's oft-repeated pro-privacy stance (provided you're not in China) by declaring that privacy-piercing browser code will be treated as a security abuse.…

17:44

Kaspersky and Trend Micro get patch bonanza after ID flaw and password manager holes spotted [The Register]

Quis custodiet ipsos custodes?

Kaspersky and Trend Micro have released updates to address vulnerabilities in their respective security tools.…

17:31

Salesforce takes the multi-signer DNSSEC ball and runs with it [The Register]

Extending DNS security protocol to multiple platforms takes root

A plan to expand the current DNSSEC security protocol to cover multiple DNS platforms has received the backing of Salesforce, with a first proof-of-concept implementation of the approach announced on Thursday.…

16:02

Truckers, prepare to lose your jobs as UPS buys into self-driving tech [The Register]

A human driver is still needed, for the moment at least

Package delivery giant UPS has invested in TuSimple, a self-driving startup based in San Diego, California, to develop autonomous trucks, the mega-corp announced on Thursday.…

15:00

'Hey Google, remind Greg the locks have been changed, and he should find a new place to live. Maybe ask his mistress?' [The Register]

Google Assistant can now send reminders to friends and family – this won't end well

Having failed to grasp the lesson of Microsoft's annoying animated Office assistant, Clippy – humans hate being hectored by software – Google has empowered its Assistant software to remind people to do things at the behest of another.…

13:56

Ohio state's top legal eagle just made it harder for the FBI, ICE, cops to snoop around its DMV DB for people's faces [The Register]

Reminder: They're not allowed to do that without permission

The Attorney General of Ohio has banned cops and the Feds from accessing the US state's database of drivers' license plates and faces until the officers and g-men receive adequate privacy compliance training.…

12:56

Cisco axes hundreds, shares tumble amid China cut-off – but we're winning the trade war, right? So much winning [The Register]

Small percentage of workforce but sign of the times

Cisco has laid off 500 programmers in its home state of California amid disappointing financial results and a sagging share price.…

12:00

Virtually all polled enterprises say they'll use SD-WAN in next two years. Do you know what it is? Let us fill you in [The Register]

SD-WAN, bam, thank you, ma'am

Backgrounder  Businesses relying on hybrid clouds need to be especially mindful of how they protect the sensitive data that flows between their on- and off-premises systems. Employees can be anywhere, using multiple devices (sometimes simultaneously) and any type of network (including public Wi-Fi) to access cloud services, all of which need to be secured against malware, unauthorized access and eavesdropping.…

10:36

We're not going Huawei even if you ban our 5G kit, Chinese firm tells UK [The Register]

Translation: they're in Blighty to stay and they know it

Huawei has reportedly boasted that it will continue investing in the UK even if the British government U-turns on allowing the Chinese company to supply critical 5G mobile network equipment.…

09:03

If bigger seats and nicer nosh in British Airways' First Class still aren't enough, would sir like to wear some VR goggles? [The Register]

Now you need only briefly see the cabin interior

Good news for well-heeled British Airways customers! Now you can transport yourself away from the carrier's aircraft interior through the wizardry of Virtual Reality.…

08:53

Saturday Morning Breakfast Cereal - Kill [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
They need to make Baby Not On Board signs, as a courtesy of people trying to drive recklessly in peace.


Today's News:

08:07

Don't let your dreams be dreams! Itty-bitty space shuttle to ride into orbit on a Vulcan Centaur [The Register]

That's a rocket by the way, not half-horse, half-Spock

Wannabe space station supplier Sierra Nevada Corporation (SNC) has selected ULA's Vulcan Centaur rocket to launch its Dream Chaser freighter in 2021.…

07:18

Radeon Pro Software for Enterprise 19.Q3 for Linux Released [Phoronix]

Wednesday marked the release of AMD's Radeon Pro Software for Enterprise driver package for Windows and Linux...

07:12

Tariffs, don't like it. Rock the AFA, rock the AFA: NetApp's all-flash sales crash hits top-line stats [The Register]

It's true most blame put on China-linked woes, but it'll hire staff in s-a-a-a-a-a-a-ales

A sharp slowdown in enterprise customers' all-flash array purchases has sucker-punched NetApp, and though it is hiring more sales heads to fix this worry, things aren't forecast to get better anytime soon.…

07:00

AMDVLK 2019.Q3.4 Vulkan Driver Enables Atomic Optimizer For Navi [Phoronix]

AMD's official open-source Vulkan driver code had fallen off its roughly weekly code push / release cadence with not having a new release in nearly three weeks, but that changed today with the availability of AMDVLK 2019.Q3.4...

06:45

KDE Applications 19.08 Released With Dolphin Improvements, Better Konsole Tiling [Phoronix]

The KDE community has delivered the release of KDE Applications 19.08 as their newest feature updates to the core collection of KDE programs...

06:23

Bomb-hoaxing DoSer who targeted police in revenge was caught after Twitter taunts [The Register]

Mostly the public adversely affected

A young man who DoSed two British police forces' websites has been sentenced to 16 months in a young offenders' institution.…

06:03

Microsoft's Component Firmware Update Is Their Latest Short-Sighted Spec [Phoronix]

Microsoft's newest specification is the "Component Firmware Update" that they envision as a standard for OEMs/IHVs to be able to handle device firmware/microcode updating in a robust and secure manner. While nice in theory, the actual implementation has a number of issues that complicate the process and could quickly evolve into another troubling specification from Microsoft in the hardware space...

05:13

Guys, it's fine. Don't worry about randomers listening to your Skype convos. Microsoft has tweaked an FAQ a bit [The Register]

'Automated and manual' data processing – so humans, yeah?

Microsoft has responded to the furore over its use of humans to listen in on Skype and Cortana recordings by tweaking its privacy policy a bit.…

04:43

Intel SYCL Compiler/Runtimes Updated With Unified Shared Memory Support [Phoronix]

Intel has released a new version of their SYCL compiler and run-time code for single-source C++ programming and allowing offloaded computations to accelerators via OpenCL...

04:13

Now you see them... IBM made over 800 UK jobs vanish in 2018 despite improving fortunes [The Register]

Axe fell on sales, marketing and product development

One in 15 IBM jobs in the UK were rubbed out during calendar 2018 despite local financials returning to growth.…

03:01

Poor old Jupiter has had a rough childhood after getting a massive hit from a mega-Earth [The Register]

Jumpin' Jupiter smash, it's a gas, gas, gas

Jupiter may have started life as a dense rocky planet that only became more gas-like after a massive newborn planet smashed right into it 4.5 billion years ago, according to new research.…

02:20

AMD Bulldozer/Jaguar CPUs Will No Longer Advertise RdRand Support Under Linux [Phoronix]

Not directly related to the recent AMD Zen 2 BIOS update needed to fix an RdRand problem (though somewhat related in that the original systemd bug report for faulty AMD RdRand stems from these earlier CPUs), but AMD has now decided to no longer advertise RdRand support for Family 15h (Bulldozer) and Family 16h (Jaguar) processors under Linux...

02:01

Quick question, what the Hull? City khazi is a top UK tourist destination [The Register]

TripAdvisor said what now?

A Victorian public convenience in Hull has made Lonely Planet's list of the best 500 places to visit in the UK.…

01:08

How dodgy browser plugins, web scripts can silently rewrite that URL you were about to hit – and throw you into an internet wormhole [The Register]

Clickjacking code found on sites with 43 million daily visits total

Analysis  Clickjacking, which came to the attention of security types more than a decade ago, continues to thrive, despite defenses deployed since then by browser makers.…

00:29

World recoils in horror as smartphone maker accused of helping government snoops read encrypted texts, track device whereabouts [The Register]

Thinking US again? You'd be wrong

Comment  In a report that has left lawmakers across the globe reeling, the Wall Street Journal on Wednesday claimed a smartphone maker helped government officials in Uganda access encrypted texts on a handset used by one of its own citizens, and track the device's whereabouts.…

00:08

Oracle Is Working To Upstream More Of DTrace To The Linux Kernel & eBPF Implementation [Phoronix]

While DTrace prospects for the Linux kernel are no longer viewed as magical or groundbreaking as they once were more than a decade ago, Oracle continues to work on its DTrace port to Linux and extending its reach beyond just their "Unbreakable Enterprise Kernel" for their RHEL-cloned Oracle Linux. Oracle now says they are working towards upstreaming more work as well as getting an eBPF-based implementation for the kernel...

Wednesday, 14 August

19:41

Cloudflare Global Network Expands to 193 Cities [The Cloudflare Blog]


Cloudflare’s global network currently spans 193 cities across 90+ countries. With over 20 million Internet properties on our network, we increase the security, performance, and reliability of large portions of the Internet every time we add a location.


Expanding Network to New Cities

So far in 2019, we’ve added a score of new locations: Amman, Antananarivo*, Arica*, Asunción, Baku, Bengaluru, Buffalo, Casablanca, Córdoba*, Cork, Curitiba, Dakar*, Dar es Salaam, Fortaleza, Geneva, Göteborg, Guatemala City, Hyderabad, Kigali, Kolkata, Male*, Maputo, Nagpur, Neuquén*, Nicosia, Nouméa, Ottawa, Port-au-Prince, Porto Alegre, Querétaro, Ramallah, and Thessaloniki.

Our Humble Beginnings

When Cloudflare launched in 2010, we focused on putting servers at the Internet’s crossroads: large data centers with key connections, like the Amsterdam Internet Exchange and Equinix Ashburn. This not only provided the most value to the most people at once, but was also easier to manage, since it kept our servers in the same buildings as the local ISPs, server providers, and other networks we needed to talk to in order to streamline our services.

This is a great approach for bootstrapping a global network, but we’re obsessed with speed in general. There are over five hundred cities in the world with over one million inhabitants, yet only a handful of them have the kinds of major Internet exchanges that we targeted. Our goal as a company is to help make a better Internet for all, not just those lucky enough to live in areas with affordable and easily accessible interconnection points. However, we ran up against two broad, nasty problems: a) we were running out of major Internet exchanges to expand into, and b) latency still wasn’t as low as we wanted. Clearly, we had to start scaling in new ways.

One of our first big steps was entering into partnerships around the world with local ISPs, who have many of the same problems we do: ISPs want to save money and provide fast Internet to their customers, but they often don’t have a major Internet exchange nearby to connect to. Adding Cloudflare equipment to their infrastructure effectively brought more of the Internet closer to them. We help them speed up millions of Internet properties while reducing costs by serving traffic locally. Additionally, since all of our servers are designed to support all our products, a relatively small physical footprint can also provide security, performance, reliability, and more.

Upgrading Capacity in Existing Cities

Though it may be obvious and easy to overlook, continuing to build out existing locations is also a key facet of building a global network. This year, we have significantly increased the computational capacity at the edge of our network. Additionally, by making it easier to interconnect with Cloudflare, we have increased the number of unique networks directly connected with us to over 8,000. This makes for a faster, more reliable Internet experience for the >1 billion IPs that we see daily.

To make these capacity upgrades possible for our customers, efficient infrastructure deployment has been one of our keys to success. We want our infrastructure deployment to be targeted and flexible.

Targeted Deployment

The next Cloudflare customer through our door could be a small restaurant owner on a Pro plan with thousands of monthly pageviews or a fast-growing global tech company like Discord. As a result, we need to always stay one step ahead and synthesize a lot of data all at once for our customers.

To accommodate this expansion, our Capacity Planning team is learning new ways to optimize our servers. One key strategy is targeting exactly where to send our servers. However, staying on top of everything isn’t easy - we are a global anycast network, which introduces unpredictability as to where incoming traffic goes. To make things even more difficult, each city can contain as many as five distinct deployments. Planning isn’t just a question of what city to send servers to, it’s one of which address.

To make sense of it all, we tackle the problem with simulations. Some, but not all, of the variables we model include historical traffic growth rates, foreseeable anomalous spikes (e.g., Cyber Day in Chile), and consumption states from our live deal pipeline, as well as product costs, user growth, and end-customer adoption. We also add in site reliability, potential for expansion, and expected regional expansion and partnerships, as well as strategic priorities and, of course, feedback from our fantastic Systems Reliability Engineers.

Flexible Supply Chain

Knowing where to send a server is only the first of many challenges when it comes to a global network. Just like our user base, our supply chain must span the entire world while staying flexible enough to react quickly to time constraints, pricing changes (including taxes and tariffs), import/export restrictions, required certifications, local partnerships, and many more dynamic location-specific variables. All the more reason to stay quick on our feet: there will always be unforeseen roadblocks and detours in even the most well-prepared plans. For example, a planned expansion in our Prague location might warrant an expanded presence in Vienna for failover.

Once servers arrive at our data centers, our Data Center Deployment and Technical Operations teams work with our vendors and on-site data center personnel (our “Remote Hands” and “Smart Hands”) to install the physical server, manage the cabling, and handle other early-stage provisioning processes.

Our architecture, which is designed so that every server can support every service, makes it easier to withstand hardware failures and efficiently load balance workloads between equipment and between locations.

Join Our Team

If working at a rapidly expanding, globally diverse company interests you, we’re hiring for scores of positions, including in the Infrastructure group. If you want to help increase hardware efficiency, deploy and maintain servers, work on our supply chain, or strengthen ISP partnerships, get in touch.

*Represents cities where we have data centers with active Internet ports and where we are configuring our servers to handle traffic for more customers (at the time of publishing)

18:37

Talk about keeping it in the family: Dell-owned Pivotal shares rocket after Dell-owned VMware mulls gobbling it up [The Register]

Stock price back up to, er, just below IPO level

Dell-owned VMware is in talks to acquire Dell-owned Pivotal Software, the hypervisor giant announced Wednesday.…

18:02

Cisc-o-no! 'We’re being uninvited to bid' on China deals admits CEO as Middle Kingdom snub freaks out investors [The Register]

Stock price dives as Wall St learns of trouble overseas and weak outlook

Cisco warned of problems on the horizon as it wrapped up its fiscal 2019 financial results [PDF].…

15:59

Intel: Listen up, you NUC-leheads! Mini PCs and compute sticks just got a major security fix [The Register]

Chipzilla patches firmware, drivers, SDKs

Hot on the heels of Patch Tuesday fixes from Microsoft, Apple, Adobe, and SAP, Intel has dropped its monthly security bundle to address a series of seven CVE-listed vulnerabilities in its firmware and software.…

15:06

Chin up, CapitalOne: You may not have been the suspected hacker's only victim. Feds fear 30-plus organizations hit [The Register]

Prosecutors file papers to keep Paige Thompson behind bars while awaiting trial

The ex-Amazon software engineer accused of stealing the personal information of 106 million people from Capital One's cloud-hosted databases may have hacked dozens of other organizations.…

13:56

WeWork filed its IPO homework. So we had a look at its small print and... yowser. What has El Reg got itself into? [The Register]

Authentic tech company vibes, right down to billions in losses and admission it 'may never be profitable'

Comment  WeWork, the office rental upstart that poses as some kind of tech startup incubation facility, has submitted the paperwork for its stock-market debut in the US – and its filings warn the biz “may never be profitable.”…

10:59

Saturday Morning Breakfast Cereal - Hack [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
No really, from now on it's all basically jokes I stole from Janelle Shane.


Today's News:

08:01

Blender 2.80 Performance With Intel Xeon Platinum 8280 vs. AMD EPYC 7742 [Phoronix]

The Blender 2.80 release arrived at the end of July, unfortunately too late to be included in our launch-day testing of AMD's EPYC 7002 "Rome" processors. As a follow-up, here are AMD EPYC 7742 performance benchmarks up against the Intel Xeon Platinum 8280 Cascade Lake as well as the AMD EPYC 7601 2P. Blender 2.80 performance is the focus of this article along with some other renderer benchmarks.

02:00

Taz Brown: How Do You Fedora? [Fedora Magazine]

We recently interviewed Taz Brown on how she uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Taz Brown is a seasoned IT professional with over 15 years of experience. “I have worked as a systems administrator, senior Linux administrator, DevOps engineer and I now work as a senior Ansible automation consultant at Red Hat with the Automation Practice Team.” Originally Taz started using Ubuntu, but she started using CentOS, Red Hat Enterprise Linux and Fedora as a Linux administrator in the IT industry.

Taz is relatively new to contributing to open source, but she found that code was not the only way to contribute. “I prefer to contribute through documentation as I am not a software developer or engineer. I found that there was more than one way to contribute to open source than just through code.”

All about Taz

Her childhood hero is Wonder Woman. Her favorite movie is Hackers. “My favorite scene is the beginning of the movie,” Taz tells the Magazine. “The movie starts with a group of special agents breaking into a house to catch the infamous hacker, Zero Cool. We soon discover that Zero Cool is actually 11-year-old Dade Murphy, who managed to crash 1,507 computer systems in one day. He is charged for his crimes and his family is fined $45,000. Additionally, he is banned from using computers or touch-tone telephones until he is 18.”

Her favorite character in the movie is Paul Cook. “Paul Cook, Lord Nikon, played by Laurence Mason was my favorite character. One of the main reasons is that I never really saw a hacker movie that had characters that looked like me so I was fascinated by his portrayal. He was enigmatic. It was refreshing to see and it made me real proud that I was passionate about IT and that I was a geek of sorts.”

Taz is an amateur photographer and uses a Nikon D3500. “I definitely like vintage things so I am looking to add a new one to my collection soon.” She also enjoys 3D printing, and drawing. “I use open source tools in my hobbies such as Wekan, which is an open-source kanban utility.”

Taz Brown with Astronaut

The Fedora community

Taz first started using Linux about 8 years ago. “I started using Ubuntu and then graduated to Fedora and its community and I was hooked. I have been using Fedora now for about 5 years.”

When she became a Linux Administrator, Linux turned into a passion. “I was trying to find my way in terms of contributing to open source. I didn’t know where to go so I wondered if I could truly be an open source enthusiast and influencer because the community is so vast, but once I found a few people who embraced my interests and could show me the way, I was able to open up and ask questions and learn from the community.”

Taz first became involved with the Fedora community through her work as a Linux systems engineer while working at Mastercard. “My first impressions of the Fedora community was one of true collaboration, respect and sharing.”

When Brown talked about the Fedora Project she gave an excellent analogy. “America is a melting pot and that’s how I see open source projects like the Fedora Project. There is plenty of room for diverse contributions to the Fedora Project. There are so many ways in which to get and stay involved and there is also room for new ideas.”

When we asked Brown about what she would like to see improved in the Fedora community, she commented on making others more aware of the opportunities. “I wish those who are typically underrepresented in tech were more aware of the amazing commitment that the Fedora Project has to diversity and inclusion in open source and in the Fedora community.”

Next Taz had some advice for people looking to join the Fedora Community. “It’s a great decision and one that you likely will not regret joining. Fedora is a project with a very large supportive community and if you’re new to open source, it’s definitely a great place to start. There is a lot of cool stuff in Fedora. I believe there are limitless opportunities for The Fedora Project.”

What hardware?

Taz uses a Lenovo ThinkServer TS140 with 64 GB of RAM, four 1 TB SSDs, and a 1 TB hard drive for data storage. The server is currently running Fedora 30. She also has a Synology NAS with 164 TB of storage in a RAID 5 configuration. Taz also has a Logitech MX Master and MX Master 2S. “For my keyboard, I use a Kinesis Advantage 2.” She also uses two 38-inch LG ultrawide curved monitors and a single 34-inch LG ultrawide monitor.

She owns a System76 laptop. “I use the 16.1-inch Oryx Pro by System76 with IPS Display with i7 processor with 6 cores and 12 threads.” It has 6 GB GDDR6 RTX 2060 w/ 1920 CUDA Cores and also 64 GB of DDR4 RAM and a total of 4 TB of SSD storage. “I love the way Fedora handles my peripherals and like my mouse and keyboard. Everything works seamlessly. Plug and play works as it should and performance never suffers.”

Amazing Monitor Setup

What software?

Brown is currently running Fedora 30. She has a variety of software in her everyday workflow. “I use Wekan, which is an open-source kanban, which I use to manage my engagements and projects. My favorite editor is Atom, though I used to use Sublime at one point in time.”

And as for terminals? “I use Terminator as my go-to terminal because of its grid arrangement as well as its many keyboard shortcuts and its tab formation.” Taz continues, “I love using neofetch, which comes up with a nifty Fedora logo and system information every time I log in to the terminal. I also have my terminal pimped out using powerline and powerlevel9k and vim-powerline as well.”

Taz Brown screenshot of Linux terminal.

Tuesday, 13 August

21:21

Building a GraphQL server on the edge with Cloudflare Workers [The Cloudflare Blog]


Today, we're open-sourcing an exciting project that showcases the strengths of our Cloudflare Workers platform: workers-graphql-server is a batteries-included Apollo GraphQL server, designed to get you up and running quickly with GraphQL.

Testing GraphQL queries in the GraphQL Playground

As a full-stack developer, I’m really excited about GraphQL. I love building user interfaces with React, but as a project gets more complex, it can become really difficult to manage how data is handled inside an application. GraphQL makes that really easy - instead of having to recall the REST URL structure of your backend API, or remember where your backend server doesn't quite follow REST conventions - you just tell GraphQL what data you want, and it takes care of the rest.

Cloudflare Workers is uniquely suited to hosting a GraphQL server. Because your code runs on Cloudflare's servers around the world, the average latency for your requests is extremely low, and by using Wrangler, our open-source command line tool for building and managing Workers projects, you can deploy new versions of your GraphQL server around the world within seconds.

If you'd like to try the GraphQL server, check out a demo GraphQL playground, deployed on Workers.dev. This optional add-on to the GraphQL server allows you to experiment with GraphQL queries and mutations, giving you a super powerful way to understand how to interface with your data, without having to hop into a codebase.
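Because a GraphQL server is ultimately just an HTTP endpoint that accepts a JSON-encoded query, you can also poke at a deployment from the command line. A sketch with curl - the URL and the schema fields here are hypothetical placeholders, not part of the project:

```shell
# POST a GraphQL query as JSON to a (hypothetical) Workers deployment;
# adjust the URL and the query to match your own server's schema.
curl -s https://my-graphql-server.example.workers.dev/ \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ todos { id title } }"}'
```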

If you're ready to get started building your own GraphQL server with our new open-source project, we've added a new tutorial to our Workers documentation to help you get up and running - check it out here!

Finally, if you're interested in how the project works, or want to help contribute - it's open-source! We'd love to hear your feedback and see your contributions. Check out the project on GitHub.

11:00

On the recent HTTP/2 DoS attacks [The Cloudflare Blog]


Today, multiple Denial of Service (DoS) vulnerabilities were disclosed for a number of HTTP/2 server implementations. Cloudflare uses NGINX for HTTP/2. Customers using Cloudflare are already protected against these attacks.

The individual vulnerabilities, originally discovered by Netflix and included in this announcement, are:

As soon as we became aware of these vulnerabilities, Cloudflare’s Protocols team started working on fixing them. We first pushed a patch to detect any attack attempts and to see if any normal traffic would be affected by our mitigations. This was followed up with work to mitigate these vulnerabilities; we pushed the changes out a few weeks ago and continue to monitor similar attacks on our stack.

If any of our customers host web services over HTTP/2 on an alternative, publicly accessible path that is not behind Cloudflare, we recommend you apply the latest security updates to your origin servers in order to protect yourselves from these HTTP/2 vulnerabilities.
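What that looks like in practice depends on your distribution and HTTP/2 server; on a Fedora- or RHEL-style host running NGINX, for example, it might be as simple as the following sketch (package and service names assumed):

```shell
# Check which nginx build is installed, then pull in the patched
# package and restart the service (Fedora/RHEL-style host assumed)
nginx -v
sudo dnf upgrade --refresh nginx
sudo systemctl restart nginx
```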

We will soon follow up with more details on these vulnerabilities and how we mitigated them.

Full credit for the discovery of these vulnerabilities goes to Jonathan Looney of Netflix and Piotr Sikora of Google and the Envoy Security Team.

07:01

Magic Transit makes your network smarter, better, stronger, and cheaper to operate [The Cloudflare Blog]

Magic Transit makes your network smarter, better, stronger, and cheaper to operate

Today we’re excited to announce Cloudflare Magic Transit. Magic Transit provides secure, performant, and reliable IP connectivity to the Internet. Out-of-the-box, Magic Transit deployed in front of your on-premise network protects it from DDoS attack and enables provisioning of a full suite of virtual network functions, including advanced packet filtering, load balancing, and traffic management tools.


Magic Transit is built on the standards and networking primitives you are familiar with, but delivered from Cloudflare’s global edge network as a service. Traffic is ingested by the Cloudflare Network with anycast and BGP, announcing your company’s IP address space and extending your network presence globally. Today, our anycast edge network spans 193 cities in more than 90 countries around the world.

Once packets hit our network, traffic is inspected for attacks, filtered, steered, accelerated, and sent onward to the origin. Magic Transit will connect back to your origin infrastructure over Generic Routing Encapsulation (GRE) tunnels, private network interconnects (PNI), or other forms of peering.

Enterprises are often forced to pick between performance and security when deploying IP network services. Magic Transit is designed from the ground up to minimize these trade-offs: performance and security are better together. Magic Transit deploys IP security services across our entire global network. This means no more diverting traffic to small numbers of distant “scrubbing centers” or relying on on-premise hardware to mitigate attacks on your infrastructure.

We’ve been laying the groundwork for Magic Transit for as long as Cloudflare has been in existence, since 2010. Scaling and securing the IP network Cloudflare is built on has required tooling that would have been impossible or exorbitantly expensive to buy. So we built the tools ourselves! We grew up in the age of software-defined networking and network function virtualization, and the principles behind these modern concepts run through everything we do.

When we talk to our customers managing on-premise networks, we consistently hear a few things: building and managing their networks is expensive and painful, and those on-premise networks aren’t going away anytime soon.

Traditionally, CIOs trying to connect their IP networks to the Internet do this in two steps:

  1. Source connectivity to the Internet from transit providers (ISPs).
  2. Purchase, operate, and maintain network function specific hardware appliances. Think hardware load balancers, firewalls, DDoS mitigation equipment, WAN optimization, and more.

Each of these boxes costs time and money to maintain, not to mention the skilled, expensive people required to properly run them. Each additional link in the chain makes a network harder to manage.

This all sounded familiar to us. We had an aha! moment: we had the same issues managing our datacenter networks that power all of our products, and we had spent significant time and effort building solutions to those problems. Now, nine years later, we had a robust set of tools we could turn into products for our own customers.

Magic Transit aims to bring the traditional datacenter hardware model into the cloud, packaging transit with all the network “hardware” you might need to keep your network fast, reliable, and secure. Once deployed, Magic Transit allows seamless provisioning of virtualized network functions, including routing, DDoS mitigation, firewalling, load balancing, and traffic acceleration services.

Magic Transit is your network’s on-ramp to the Internet

Magic Transit delivers its connectivity, security, and performance benefits by serving as the “front door” to your IP network. This means it accepts IP packets destined for your network, processes them, and then outputs them to your origin infrastructure.

Connecting to the Internet via Cloudflare offers numerous benefits. Starting with the most basic, Cloudflare is one of the most extensively connected networks on the Internet. We work with carriers, Internet exchanges, and peering partners around the world to ensure that a bit placed on our network will reach its destination quickly and reliably, no matter the destination.

An example deployment: Acme Corp

Let’s walk through how a customer might deploy Magic Transit. Customer Acme Corp. owns the IP prefix 203.0.113.0/24, which they use to address a rack of hardware they run in their own physical datacenter. Acme currently announces routes to the Internet from their customer-premise equipment (CPE, aka a router at the perimeter of their datacenter), telling the world 203.0.113.0/24 is reachable from their autonomous system number, AS64512. Acme has DDoS mitigation and firewall hardware appliances on-premise.


Acme wants to connect to the Cloudflare Network to improve the security and performance of their own network. Specifically, they’ve been the target of distributed denial of service attacks, and want to sleep soundly at night without relying on on-premise hardware. This is where Cloudflare comes in.


Deploying Magic Transit in front of their network is simple:

  1. Cloudflare uses Border Gateway Protocol (BGP) to announce Acme’s 203.0.113.0/24 prefix from Cloudflare’s edge, with Acme’s permission.
  2. Cloudflare begins ingesting packets destined for the Acme IP prefix.
  3. Magic Transit applies DDoS mitigation and firewall rules to the network traffic. After it is ingested by the Cloudflare network, traffic that would benefit from HTTPS caching and WAF inspection can be “upgraded” to our Layer 7 HTTPS pipeline without incurring additional network hops.
  4. Acme would like Cloudflare to use Generic Routing Encapsulation (GRE) to tunnel traffic from the Cloudflare Network back to Acme’s datacenter. GRE tunnels are initiated from anycast endpoints back to Acme’s premises. Through the magic of anycast, the tunnels are constantly and simultaneously connected to hundreds of network locations, ensuring the tunnels are highly available and resilient to network failures that would bring down traditionally formed GRE tunnels.
  5. Cloudflare egresses packets bound for Acme over these GRE tunnels.
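As a rough sketch, Acme’s side of such a GRE tunnel could be configured on a Linux router as follows. The addresses here are placeholders for illustration only (192.0.2.10 standing in for a Cloudflare anycast tunnel endpoint, 198.51.100.5 for Acme’s public router address), not real endpoints:

```shell
# Create a GRE tunnel interface toward the (placeholder) anycast endpoint
ip tunnel add cf-gre0 mode gre local 198.51.100.5 remote 192.0.2.10 ttl 255

# Give the tunnel inner point-to-point addressing and bring it up
ip addr add 10.0.0.2/31 dev cf-gre0
ip link set cf-gre0 up
```

Because the remote end is anycast, this single tunnel definition reaches whichever Cloudflare location is closest, rather than one fixed device.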

Let’s dive deeper on how the DDoS mitigation included in Magic Transit works.

Magic Transit protects networks from DDoS attack

Customers deploying Cloudflare Magic Transit instantly get access to the same IP-layer DDoS protection system that has protected the Cloudflare Network for the past 9 years. This is the same mitigation system that stopped a 942Gbps attack dead in its tracks, in seconds. This is the same mitigation system that knew how to stop memcached amplification attacks days before a 1.3Tbps attack took down Github, which did not have Cloudflare watching its back. This is the same mitigation we trust every day to protect Cloudflare, and now it protects your network.

Cloudflare has historically protected Layer 7 HTTP and HTTPS applications from attacks at all layers of the OSI model. The DDoS protection our customers have come to know and love relies on a blend of techniques, but can be broken into a few complementary defenses:

  1. Anycast and a network presence in 193 cities around the world allows our network to get close to users and attackers, allowing us to soak up traffic close to the source without introducing significant latency.
  2. 30+Tbps of network capacity allows us to soak up a lot of traffic close to the source. Cloudflare's network has more capacity to stop DDoS attacks than that of Akamai Prolexic, Imperva, Neustar, and Radware — combined.
  3. Our HTTPS reverse proxy absorbs L3 (IP layer) and L4 (TCP layer) attacks by terminating connections and re-establishing them to the origin. This stops most spurious packet transmissions from ever getting close to a customer origin server.
  4. Layer 7 mitigations and rate limiting stop floods at the HTTPS application layer.

Looking at the above description carefully, you might notice something: our reverse proxy servers protect our customers by terminating connections, but our network and servers still get slammed by the L3 and L4 attacks we stop on behalf of our customers. How do we protect our own infrastructure from these attacks?

Enter Gatebot!

Gatebot is a suite of software running on every one of our servers inside each of our datacenters in the 193 cities we operate, constantly analyzing and blocking attack traffic. Part of Gatebot’s beauty is its simple architecture; it sits silently, in wait, sampling packets as they pass from the network card into the kernel and onward into userspace. Gatebot does not have a learning or warm-up period. As soon as it detects an attack, it instructs the kernel of the machine it is running on to drop the packet, log its decision, and move on.
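The detect-and-drop pattern Gatebot applies can be illustrated with a toy model. This is a purely hypothetical Python sketch of rate-based detection (the real system samples raw packets and drops them in the kernel, not in userspace):

```python
from collections import Counter

class SynFloodDetector:
    """Toy rate-based detector: count sampled SYN packets per destination
    and start dropping once a destination exceeds a threshold.
    Illustrative only; thresholds and state here are invented."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.counts = Counter()
        self.blocked = set()

    def observe(self, dst_ip):
        # Already-flagged destinations are dropped immediately
        if dst_ip in self.blocked:
            return "drop"
        self.counts[dst_ip] += 1
        # Once the sampled rate exceeds the threshold, flag and drop
        if self.counts[dst_ip] > self.threshold:
            self.blocked.add(dst_ip)
            return "drop"
        return "pass"
```

A detector like this needs no warm-up: the first packet over the threshold triggers mitigation, mirroring Gatebot's lack of a learning period.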

Historically, if you wanted to protect your network from a DDoS attack, you might have purchased a specialized piece of hardware to sit at the perimeter of your network. This hardware box (let’s call it “The DDoS Protection Box”) would have been fantastically expensive, pretty to look at (as pretty as a 2U hardware box could get), and required a ton of recurring effort and money to stay on its feet, keep its license up to date, and keep its attack detection system accurate and trained.

For one thing, it would have to be carefully monitored to make sure it was stopping attacks but not stopping legitimate traffic. For another, if an attacker managed to generate enough traffic to saturate your datacenter’s transit links to the Internet, you were out of luck; no box sitting inside your datacenter can protect you from an attack generating enough traffic to congest the links running from the outside world to the datacenter itself.

Early on, Cloudflare considered buying The DDoS Protection Box(es) to protect our various network locations, but ruled them out quickly. Buying hardware would have incurred substantial cost and complexity. In addition, buying, racking, and managing specialized pieces of hardware makes a network hard to scale. There had to be a better way. We set out to solve this problem ourselves, starting from first principles and modern technology.

To make our modern approach to DDoS mitigation work, we had to invent a suite of tools and techniques to allow us to do ultra-high performance networking on a generic x86 server running Linux.

At the core of our network data plane is the eXpress Data Path (XDP) and the extended Berkeley Packet Filter (eBPF), a set of APIs that allow us to build ultra-high performance networking applications in the Linux kernel. My colleagues have written extensively about how we use XDP and eBPF to stop DDoS attacks:

At the end of the day, we ended up with a DDoS mitigation system that:

  • Is delivered by our entire network, spread across 193 cities around the world. To put this another way, our network doesn’t have the concept of “scrubbing centers” — every single one of our network locations is always mitigating attacks, all the time. This means faster attack mitigation and minimal latency impact for your users.
  • Has exceptionally fast time to mitigate, with most attacks mitigated in 10 seconds or less.
  • Was built in-house, giving us deep visibility into its behavior and the ability to rapidly develop new mitigations as we see new attack types.
  • Is deployed as a service, and is horizontally scalable. Adding x86 hardware running our DDoS mitigation software stack to a datacenter (or adding another network location) instantly brings more DDoS mitigation capacity online.

Gatebot is designed to protect Cloudflare infrastructure from attack. And today, as part of Magic Transit, customers operating their own IP networks and infrastructure can rely on Gatebot to protect their own network.

Magic Transit puts your network hardware in the cloud

We’ve covered how Cloudflare Magic Transit connects your network to the Internet, and how it protects you from DDoS attack. If you were running your network the old-fashioned way, this is where you’d stop to buy firewall hardware, and maybe another box to do load balancing.

With Magic Transit, you don’t need those boxes. We have a long track record of delivering common network functions (firewalls, load balancers, etc.) as services. Up until this point, customers deploying our services have relied on DNS to bring traffic to our edge, after which our Layer 3 (IP), Layer 4 (TCP & UDP), and Layer 7 (HTTP, HTTPS, and DNS) stacks take over and deliver performance and security to our customers.

Magic Transit is designed to handle your entire network, but does not enforce a one-size-fits-all approach to what services get applied to which portion of your traffic. To revisit Acme, our example customer from above, they have brought 203.0.113.0/24 to the Cloudflare Network. This represents 256 IPv4 addresses, some of which (e.g. 203.0.113.8/30) might front load balancers and HTTP servers, others mail servers, and others still custom UDP-based applications.

Each of these sub-ranges may have different security and traffic management requirements. Magic Transit allows you to configure specific IP addresses with their own suite of services, or apply the same configuration to large portions (or all) of your block.

Taking the above example, Acme may want the 203.0.113.8/30 block containing HTTP services, currently fronted by a traditional hardware load balancer, to use the Cloudflare Load Balancer instead, with HTTP traffic analyzed by Cloudflare’s WAF and content cached by our CDN. With Magic Transit, deploying these network functions is straightforward: a few clicks in our dashboard or API calls will have your traffic handled at a higher layer of network abstraction, with all the attendant benefits of application-level load balancing, firewall, and caching logic.

This is just one example of a deployment customers might pursue. We’ve worked with several who just want pure IP passthrough, with DDoS mitigation applied to specific IP addresses. Want that? We got you!

Magic Transit runs on the entire Cloudflare Global Network. Or, no more scrubs!

When you connect your network to Cloudflare Magic Transit, you get access to the entire Cloudflare network. This means all of our network locations become your network locations. Our network capacity becomes your network capacity, at your disposal to power your experiences, deliver your content, and mitigate attacks on your infrastructure.

How expansive is the Cloudflare Network? We’re in 193 cities worldwide, with more than 30Tbps of network capacity spread across them. Cloudflare operates within 100 milliseconds of 98% of the Internet-connected population in the developed world, and 93% of the Internet-connected population globally (for context, the blink of an eye is 300-400 milliseconds).

[Figure: areas of the globe within 100 milliseconds of a Cloudflare datacenter]

Just as we built our own products in house, we also built our network in house. Every product runs in every datacenter, meaning our entire network delivers all of our services. This might not have been the case if we had assembled our product portfolio piecemeal through acquisition, or not had completeness of vision when we set out to build our current suite of services.

The end result for customers of Magic Transit: a network presence around the globe as soon as you come on board. Full access to a diverse set of services worldwide. All delivered with latency and performance in mind.

We'll be sharing a lot more technical detail on how we deliver Magic Transit in the coming weeks and months.

Magic Transit lowers total cost of ownership

Traditional network services don’t come cheap; they require high capital outlays up front, investment in staff to operate, and ongoing maintenance contracts to stay functional. Just as our product aims to be disruptive technically, we want to disrupt traditional network cost-structures as well.

Magic Transit is delivered and billed as a service. You pay for what you use, and can add services at any time. Your team will thank you for its ease of management; your management will thank you for its ease of accounting. That sounds pretty good to us!

Magic Transit is available today

We’ve worked hard over the past nine years to get our network, management tools, and network functions as a service into the state they’re in today. We’re excited to get the tools we use every day in customers’ hands.

So that brings us to naming. When we showed this to customers the most common word they used was ‘whoa.’ When we pressed what they meant by that they almost all said: ‘It’s so much better than any solution we’ve seen before. It’s, like, magic!’ So it seems only natural, if a bit cheesy, that we call this product what it is: Magic Transit.

We think this is all pretty magical, and think you will too. Contact our Enterprise Sales Team today.

07:00

Magic Transit: Network functions at Cloudflare scale [The Cloudflare Blog]


Today we announced Cloudflare Magic Transit, which makes Cloudflare’s network available to any IP traffic on the Internet. Up until now, Cloudflare has primarily operated proxy services: our servers terminate HTTP, TCP, and UDP sessions with Internet users and pass that data through new sessions they create with origin servers. With Magic Transit, we are now also operating at the IP layer: in addition to terminating sessions, our servers are applying a suite of network functions (DoS mitigation, firewalling, routing, and so on) on a packet-by-packet basis.

Over the past nine years, we’ve built a robust, scalable global network that currently spans 193 cities in over 90 countries and is ever growing. All Cloudflare customers benefit from this scale thanks to two important techniques. The first is anycast networking. Cloudflare was an early adopter of anycast, using this routing technique to distribute Internet traffic across our data centers. It means that any data center can handle any customer’s traffic, and we can spin up new data centers without needing to acquire and provision new IP addresses. The second technique is homogeneous server architecture. Every server in each of our edge data centers is capable of running every task. We build our servers on commodity hardware, making it easy to quickly increase our processing capacity by adding new servers to existing data centers. Having no specialty hardware to depend on has also led us to develop an expertise in pushing the limits of what’s possible in networking using modern Linux kernel techniques.

Magic Transit is built on the same network using the same techniques, meaning our customers can now run their network functions at Cloudflare scale. Our fast, secure, reliable global edge becomes our customers’ edge. To explore how this works, let’s follow the journey of a packet from a user on the Internet to a Magic Transit customer’s network.

Putting our DoS mitigation to work… for you!

In the announcement blog post we describe an example deployment for Acme Corp. Let’s continue with this example here. When Acme brings their IP prefix 203.0.113.0/24 to Cloudflare, we start announcing that prefix to our transit providers, peers, and to Internet exchanges in each of our data centers around the globe. Additionally, Acme stops announcing the prefix to their own ISPs. This means that any IP packet on the Internet with a destination address within Acme’s prefix is delivered to a nearby Cloudflare data center, not to Acme’s router.

Let’s say I want to access Acme’s FTP server on 203.0.113.100 from my computer in Cloudflare’s office in Champaign, IL. My computer generates a TCP SYN packet with destination address 203.0.113.100 and sends it out to the Internet. Thanks to anycast, that packet ends up at Cloudflare’s data center in Chicago, which is the closest data center (in terms of Internet routing distance) to Champaign. The packet arrives on the data center’s router, which uses ECMP (Equal Cost Multi-Path) routing to select which server should handle the packet and dispatches the packet to the selected server.
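The ECMP selection step can be sketched as a hash over a flow's 5-tuple: every packet of the same connection hashes to the same server. This is a simplified, illustrative model (routers typically do this hashing in hardware, and the hash function shown is an arbitrary stand-in):

```python
import hashlib

def ecmp_pick(servers, src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Pick a server for a flow by hashing its 5-tuple.
    Deterministic: the same flow always lands on the same server."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]
```

The per-flow determinism matters: all packets of one TCP connection must reach the same server so its connection state stays in one place.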

Once at the server, the packet flows through our XDP- and iptables-based DoS detection and mitigation functions. If this TCP SYN packet were determined to be part of an attack, it would be dropped and that would be the end of it. Fortunately for me, the packet is permitted to pass.

So far, this looks exactly like any other traffic on Cloudflare’s network. Because of our expertise in running a global anycast network we’re able to attract Magic Transit customer traffic to every data center and apply the same DoS mitigation solution that has been protecting Cloudflare for years. Our DoS solution has handled some of the largest attacks ever recorded, including a 942Gbps SYN flood in 2018. Below is a screenshot of a recent SYN flood of 300M packets per second. Our architecture lets us scale to stop the largest attacks.

[Figure: graph of a recent SYN flood peaking at 300 million packets per second]

Network namespaces for isolation and control

The above looked identical to how all other Cloudflare traffic is processed, but this is where the similarities end. For our other services, the TCP SYN packet would now be dispatched to a local proxy process (e.g. our nginx-based HTTP/S stack). For Magic Transit, we instead want to dynamically provision and apply customer-defined network functions like firewalls and routing. We needed a way to quickly spin up and configure these network functions while also providing inter-network isolation. For that, we turned to network namespaces.

Namespaces are a collection of Linux kernel features for creating lightweight virtual instances of system resources that can be shared among a group of processes. Namespaces are a fundamental building block for containerization in Linux. Notably, Docker is built on Linux namespaces. A network namespace is an isolated instance of the Linux network stack, including its own network interfaces (with their own eBPF hooks), routing tables, netfilter configuration, and so on. Network namespaces give us a low-cost mechanism to rapidly apply customer-defined network configurations in isolation, all with built-in Linux kernel features so there’s no performance hit from userspace packet forwarding or proxying.

When a new customer starts using Magic Transit, we create a brand new network namespace for that customer on every server across our edge network (did I mention that every server can run every task?). We built a daemon that runs on our servers and is responsible for managing these network namespaces and their configurations. This daemon is constantly reading configuration updates from Quicksilver, our globally distributed key-value store, and applying customer-defined configurations for firewalls, routing, etc., inside the customer’s namespace. For example, if Acme wants to provision a firewall rule to allow FTP traffic (TCP ports 20 and 21) to 203.0.113.100, that configuration is propagated globally through Quicksilver and the Magic Transit daemon applies the firewall rule by adding an nftables rule to the Acme customer namespace:

# Apply nftables rule inside Acme’s namespace
$ sudo ip netns exec acme_namespace nft add rule inet filter prerouting ip daddr 203.0.113.100 tcp dport 20-21 accept

Getting the customer’s traffic to their network namespace requires a little routing configuration in the default network namespace. When a network namespace is created, a pair of virtual ethernet (veth) interfaces is also created: one in the default namespace and one in the newly created namespace. This interface pair creates a “virtual wire” for delivering network traffic into and out of the new network namespace. In the default network namespace, we maintain a routing table that forwards Magic Transit customer IP prefixes to the veths corresponding to those customers’ namespaces. We use iptables to mark the packets that are destined for Magic Transit customer prefixes, and we have a routing rule that specifies that these specially marked packets should use the Magic Transit routing table.

(Why go to the trouble of marking packets in iptables and maintaining a separate routing table? Isolation. By keeping Magic Transit routing configurations separate we reduce the risk of accidentally modifying the default routing table in a way that affects how non-Magic Transit traffic flows through our edge.)
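The plumbing described above can be sketched with iproute2 and iptables. Interface names, the firewall mark, and the routing table number below are all illustrative stand-ins, not Cloudflare's actual configuration:

```shell
# Create the customer namespace and a veth pair bridging into it
ip netns add acme_namespace
ip link add veth-acme type veth peer name veth0 netns acme_namespace
ip link set veth-acme up
ip netns exec acme_namespace ip link set veth0 up

# Mark packets destined for the customer's prefix...
iptables -t mangle -A PREROUTING -d 203.0.113.0/24 -j MARK --set-mark 100

# ...and route marked packets via a dedicated table toward the namespace
ip rule add fwmark 100 lookup 100
ip route add 203.0.113.0/24 dev veth-acme table 100
```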

[Figure: customer traffic flowing from the default namespace into per-customer network namespaces]

Network namespaces provide a lightweight environment where a Magic Transit customer can run and manage network functions in isolation, letting us put full control in the customer’s hands.

GRE + anycast = magic

After passing through the edge network functions, the TCP SYN packet is finally ready to be delivered back to the customer’s network infrastructure. Because Acme Corp. does not have a network footprint in a colocation facility with Cloudflare, we need to deliver their network traffic over the public Internet.

This poses a problem. The destination address of the TCP SYN packet is 203.0.113.100, but the only network announcing the IP prefix 203.0.113.0/24 on the Internet is Cloudflare. This means that we can’t simply forward this packet out to the Internet—it will boomerang right back to us! In order to deliver this packet to Acme we need to use a technique called tunneling.

Tunneling is a method of carrying traffic from one network over another network. In our case, it involves encapsulating Acme’s IP packets inside of IP packets that can be delivered to Acme’s router over the Internet. There are a number of common tunneling protocols, but Generic Routing Encapsulation (GRE) is often used for its simplicity and widespread vendor support.

GRE tunnel endpoints are configured both on Cloudflare’s servers (inside of Acme’s network namespace) and on Acme’s router. Cloudflare servers then encapsulate IP packets destined for 203.0.113.0/24 inside of IP packets destined for a publicly-routable IP address for Acme’s router, which decapsulates the packets and emits them into Acme’s internal network.
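The encapsulation step itself is simple enough to sketch in Python. This is illustrative only (real forwarding happens in the kernel, and fields like the identification and checksum are left zeroed here):

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType for IPv4 carried inside GRE
IPPROTO_GRE = 47         # IP protocol number for GRE

def gre_encapsulate(inner_packet: bytes, src_ip: str, dst_ip: str) -> bytes:
    """Wrap an IP packet in a minimal 4-byte GRE header plus a 20-byte
    outer IPv4 header addressed to the tunnel endpoint."""
    gre_header = struct.pack("!HH", 0, GRE_PROTO_IPV4)  # no flags, version 0
    payload = gre_header + inner_packet
    outer = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20 + len(payload),  # version 4, IHL 5, TOS, total length
        0, 0,                        # identification, flags/fragment offset
        64, IPPROTO_GRE, 0,          # TTL, protocol = GRE, checksum (zeroed)
        bytes(map(int, src_ip.split("."))),
        bytes(map(int, dst_ip.split("."))),
    )
    return outer + payload
```

The receiving router simply strips the outer 24 bytes and routes the inner packet, which is what makes GRE stateless.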

[Figure: GRE-encapsulated packets traveling over the Internet from Cloudflare’s edge to Acme’s router]

Now, I’ve omitted an important detail in the diagram above: the IP address of Cloudflare’s side of the GRE tunnel. Configuring a GRE tunnel requires specifying an IP address for each side, and the outer IP header for packets sent over the tunnel must use these specific addresses. But Cloudflare has thousands of servers, each of which may need to deliver packets to the customer through a tunnel. So how many Cloudflare IP addresses (and GRE tunnels) does the customer need to talk to? The answer: just one, thanks to the magic of anycast.

Cloudflare uses anycast IP addresses for our GRE tunnel endpoints, meaning that any server in any data center is capable of encapsulating and decapsulating packets for the same GRE tunnel. How is this possible? Isn’t a tunnel a point-to-point link? The GRE protocol itself is stateless—each packet is processed independently and without requiring any negotiation or coordination between tunnel endpoints. While the tunnel is technically bound to an IP address it need not be bound to a specific device. Any device that can strip off the outer headers and then route the inner packet can handle any GRE packet sent over the tunnel. Actually, in the context of anycast the term “tunnel” is misleading since it implies a link between two fixed points. With Cloudflare’s Anycast GRE, a single “tunnel” gives you a conduit to every server in every data center on Cloudflare’s global edge.

[Figure: a single anycast GRE “tunnel” terminating at every server in every Cloudflare data center]

One very powerful consequence of Anycast GRE is that it eliminates single points of failure. Traditionally, GRE-over-Internet can be problematic because an Internet outage between the two GRE endpoints fully breaks the “tunnel”. This means reliable data delivery requires going through the headache of setting up and maintaining redundant GRE tunnels terminating at different physical sites and rerouting traffic when one of the tunnels breaks. But because Cloudflare is encapsulating and delivering customer traffic from every server in every data center, there is no single “tunnel” to break. This means Magic Transit customers can enjoy the redundancy and reliability of terminating tunnels at multiple physical sites while only setting up and maintaining a single GRE endpoint, making their jobs simpler.

Our scale is now your scale

Magic Transit is a powerful new way to deploy network functions at scale. We’re not just giving you a virtual instance, we’re giving you a global virtual edge. Magic Transit takes the hardware appliances you would typically rack in your on-prem network and distributes them across every server in every data center in Cloudflare’s network. This gives you access to our global anycast network, our fleet of servers capable of running your tasks, and our engineering expertise building fast, reliable, secure networks. Our scale is now your scale.

06:50

Saturday Morning Breakfast Cereal - AI [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Ever since I got an early copy of Janelle Shane's new book, 80% of my jokes have been about douchey uses of AI.


Today's News:

Just leaving a note here that if you're interested in a pro-immigration fiscal case, that's chapter 3 of the new comic!

Monday, 12 August

08:18

Saturday Morning Breakfast Cereal - Prank [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
It gets really funny when he unhinges his jaw and starts enveloping and slowly digesting onlookers.


Today's News:

Sunday, 11 August

08:18

Saturday Morning Breakfast Cereal - Treasure [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
When someone starts in about the greatest treasure, you know you're about to be screwed.


Today's News:

Saturday, 10 August

Friday, 09 August

10:23

Saturday Morning Breakfast Cereal - Video Games [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
All those kids who grew up on Pac-Man now think it's okay to eat ghosts.


Today's News:

10:01

The Serverlist: Building out the SHAMstack [The Cloudflare Blog]

The Serverlist: Building out the SHAMstack

Check out our seventh edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.

Sign up below to have The Serverlist sent directly to your mailbox.

02:00

Use a drop-down terminal for fast commands in Fedora [Fedora Magazine]

A drop-down terminal lets you tap a key and quickly enter any command on your desktop. Often it creates a terminal in a smooth way, sometimes with effects. This article demonstrates how it helps to improve and speed up daily tasks, using drop-down terminals like Yakuake, Tilda, Guake and a GNOME extension.

Yakuake

Yakuake is a drop-down terminal emulator based on KDE Konsole technology. It is distributed under the terms of the GNU GPL Version 2. It includes features such as:

  • Smoothly rolls down from the top of your screen
  • Tabbed interface
  • Configurable dimensions and animation speed
  • Skinnable
  • Sophisticated D-Bus interface
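That D-Bus interface lets you script Yakuake from the shell. For example, assuming Yakuake is running and registers its usual org.kde.yakuake service:

```shell
# Toggle the drop-down window open or closed
qdbus org.kde.yakuake /yakuake/window org.kde.yakuake.toggleWindowState

# Open a new session (tab)
qdbus org.kde.yakuake /yakuake/sessions org.kde.yakuake.addSession

# Run a command in the active session
qdbus org.kde.yakuake /yakuake/sessions org.kde.yakuake.runCommand "echo hello"
```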

To install Yakuake, use the following command:

$ sudo dnf install -y yakuake

Startup and configuration

If you’re running KDE, open the System Settings and go to Startup and Shutdown. Add yakuake to the list of programs under Autostart, like this:

It’s easy to configure Yakuake while running the app. To begin, launch the program at the command line:

$ yakuake &

The following welcome dialog appears. You can set a new keyboard shortcut if the standard one conflicts with another keystroke you already use:

Now click the menu button, and the following help menu appears. Next, select Configure Yakuake… to access the configuration options.

You can customize the options for appearance, such as opacity; behavior, such as focusing terminals when the mouse pointer is moved over them; and window, such as size and animation. In the window options you’ll find one of the most useful options if you use two or more monitors: Open on screen: At mouse location.

Using Yakuake

The main shortcuts are:

  • F12 = Open/Retract Yakuake
  • Ctrl+F11 = Full Screen Mode
  • Ctrl+) = Split Top/Bottom
  • Ctrl+( = Split Left/Right
  • Ctrl+Shift+T = New Session
  • Shift+Right = Next Session
  • Shift+Left = Previous Session
  • Ctrl+Alt+S = Rename Session

Below is an example of Yakuake being used to split the session like a terminal multiplexer. Using this feature, you can run several shells in one session.

Tilda

Tilda is a drop-down terminal comparable to other popular terminal emulators such as GNOME Terminal, KDE’s Konsole, xterm, and many others.

It features a highly configurable interface. You can even change options such as the terminal size and animation speed. Tilda also lets you enable hotkeys you can bind to commands and operations.

To install Tilda, run this command:

$ sudo dnf install -y tilda

Startup and configuration

Most users prefer to have a drop-down terminal available behind the scenes when they log in. To set this option, first go to the app launcher in your desktop, search for Tilda, and open it.

Next, open up the Tilda Config window. Select Start Tilda hidden, which means it will not display a terminal immediately when started.

Next, you’ll set your desktop to start Tilda automatically. If you’re using KDE, go to System Settings > Startup and Shutdown > Autostart and use Add a Program.

If you’re using GNOME, you can run this command in a terminal:

$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/

When you run Tilda for the first time, a wizard appears to set your preferences. If you need to change something later, right-click the terminal and go to Preferences in the menu.

You can also create multiple configuration files, and bind other keys to open new terminals at different places on the screen. To do that, run this command:

$ tilda -C

Every time you use the above command, Tilda creates a new config file located in the ~/.config/tilda/ folder called config_0, config_1, and so on. You can then map a key combination to open a new Tilda terminal with a specific set of options.

Using Tilda

The main shortcuts are:

  • F1 = Pull Down Terminal Tilda (Note: If you have more than one config file, the shortcuts are the same, but each config gets a different open/retract shortcut like F1, F2, F3, and so on)
  • F11 = Full Screen Mode
  • F12 = Toggle Transparency
  • Ctrl+Shift+T = Add Tab
  • Ctrl+Page Up = Go to Next Tab
  • Ctrl+Page Down = Go to Previous Tab

GNOME Extension

The Drop-down Terminal GNOME Extension lets you use this useful tool in your GNOME Shell. It is easy to install and configure, and gives you fast access to a terminal session.

Installation

Open a browser and go to the site for this GNOME extension. Switch the extension setting to On, as shown here:

Then select Install to install the extension on your system.

Once you do this, there’s no reason to set any autostart options. The extension will automatically run whenever you log in to GNOME!

Configuration

After install, the Drop Down Terminal configuration window opens to set your preferences. For example, you can set the size of the terminal, animation, transparency, and scrollbar use.

If you need to change some preferences in the future, run the gnome-shell-extension-prefs command and choose Drop Down Terminal.

Using the extension

The shortcuts are simple:

  • ` (usually the key above Tab) = Open/Retract Terminal
  • F12 (customize as you prefer) = Open/Retract Terminal

Thursday, 08 August

16:00

Introducing Certificate Transparency Monitoring [The Cloudflare Blog]


Today we’re launching Certificate Transparency Monitoring (my summer project as an intern!) to help customers spot malicious certificates. If you opt into CT Monitoring, we’ll send you an email whenever a certificate is issued for one of your domains. We crawl all public logs to find these certificates quickly. CT Monitoring is available now in public beta and can be enabled in the Crypto Tab of the Cloudflare dashboard.

Background

Most web browsers include a lock icon in the address bar. This icon is actually a button — if you’re a security advocate or a compulsive clicker (I’m both), you’ve probably clicked it before! Here’s what happens when you do just that in Google Chrome:

Introducing Certificate Transparency Monitoring

This seems like good news. The Cloudflare blog has presented a valid certificate, your data is private, and everything is secure. But what does this actually mean?

Certificates

Your browser is performing some behind-the-scenes work to keep you safe. When you request a website (say, cloudflare.com), the website should present a certificate that proves its identity. This certificate is like a stamp of approval: it says that your connection is secure. In other words, the certificate proves that content was not intercepted or modified while in transit to you. An altered Cloudflare site would be problematic, especially if it looked like the actual Cloudflare site. Certificates protect us by including information about websites and their owners.

We pass around these certificates because the honor system doesn’t work on the Internet. If you want a certificate for your own website, just request one from a Certificate Authority (CA), or sign up for Cloudflare and we’ll do it for you! CAs issue certificates just as real-life notaries stamp legal documents. They confirm your identity, look over some data, and use their special status to grant you a digital certificate. Popular CAs include DigiCert, Let’s Encrypt, and Sectigo. This system has served us well because it has kept imposters in check, but also promoted trust between domain owners and their visitors.

Introducing Certificate Transparency Monitoring

Unfortunately, nothing is perfect.

It turns out that CAs make mistakes. In rare cases, they become reckless. When this happens, illegitimate certificates are issued (even though they appear to be authentic). If a CA accidentally issues a certificate for your website, but you did not request the certificate, you have a problem. Whoever received the certificate might be able to:

  1. Steal login credentials from your visitors.
  2. Interrupt your usual services by serving different content.

These attacks do happen, so there’s good reason to care about certificates. More often, domain owners lose track of their certificates and panic when they discover unexpected certificates. We need a way to prevent these situations from ruining the entire system.

Certificate Transparency

Ah, Certificate Transparency (CT). CT solves the problem I just described by making all certificates public and easy to audit. When CAs issue certificates, they must submit certificates to at least two “public logs.” This means that collectively, the logs carry important data about all trusted certificates on the Internet. Several companies offer CT logs — Google has launched a few of its own. We announced Cloudflare's Nimbus log last year.

Logs are really, really big, and often hold hundreds of millions of certificate records.
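Conceptually, each CT log is an append-only Merkle tree over certificate entries, so a short tree head commits to every record in the log and makes tampering detectable. Here is a minimal sketch in the spirit of RFC 6962 (simplified for illustration, not any log's production code):

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 hashes leaves with a 0x00 prefix to distinguish them from nodes
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # ...and interior nodes with a 0x01 prefix
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(entries):
    """Compute a Merkle tree head over a list of certificate entries."""
    level = [leaf_hash(e) for e in entries]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        nxt = [node_hash(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])  # an unpaired node is carried up unchanged
        level = nxt
    return level[0]

root = merkle_root([b"cert-A", b"cert-B", b"cert-C"])
```

Because any change to any entry changes the root hash, auditors only need to compare tree heads to detect an inconsistent log.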

Introducing Certificate Transparency Monitoring

The log infrastructure helps browsers validate websites’ identities. When you request cloudflare.com in Safari or Google Chrome, the browser will actually require Cloudflare’s certificate to be registered in a CT log. If the certificate isn’t found in a log, you won’t see the lock icon next to the address bar. Instead, the browser will tell you that the website you’re trying to access is not secure. Are you going to visit a website marked “NOT SECURE”? Probably not.

There are systems that audit CT logs and report illegitimate certificates. Therefore, if your browser finds a valid certificate that is also recorded in a log, everything is secure.

What We're Announcing Today

Cloudflare has been an industry leader in CT. In addition to Nimbus, we launched a CT dashboard called Merkle Town and explained how we made it. Today, we’re releasing a public beta of Certificate Transparency Monitoring.

If you opt into CT Monitoring, we’ll send you an email whenever a certificate is issued for one of your domains. When you get an alert, don’t panic; we err on the side of caution by sending alerts whenever a possible domain match is found. Sometimes you may notice a suspicious certificate. Maybe you won’t recognize the issuer, or the subdomain is not one you offer (e.g. slowinternet.cloudflare.com). Alerts are sent quickly so you can contact a CA if something seems wrong.

Introducing Certificate Transparency Monitoring

This raises the question: if services already audit public logs, why are alerts necessary? Shouldn’t errors be found automatically? Well no, because auditing is not exhaustive. The best person to audit your certificates is you. You know your website. You know your personal information. Cloudflare will put relevant certificates right in front of you.

You can enable CT Monitoring on the Cloudflare dashboard. Just head over to the Crypto Tab and find the “Certificate Transparency Monitoring” card. You can always turn the feature off if you’re too popular in the CT world.

Introducing Certificate Transparency Monitoring

If you’re on a Business or Enterprise plan, you can tell us who to notify. Instead of emailing the zone owner (which we do for Free and Pro customers), we accept up to 10 email addresses as alert recipients. We do this to avoid overwhelming large teams. These emails do not have to be tied to a Cloudflare account and can be manually added or removed at any time.

Introducing Certificate Transparency Monitoring

How This Actually Works

Our Cryptography and SSL teams worked hard to make this happen; they built on the work of some clever tools mentioned earlier:

  • Merkle Town is a hub for CT data. We process all trusted certificates and present relevant statistics on our website. This means that every certificate issued on the Internet passes through Cloudflare, and all the data is public (so no privacy concerns here).
  • Cloudflare Nimbus is our very own CT log. It contains more than 400 million certificates.

Introducing Certificate Transparency Monitoring
Note: Cloudflare, Google, and DigiCert are not the only CT log providers.

So here’s the process... At some point in time, you (or an impostor) request a certificate for your website. A Certificate Authority approves the request and issues the certificate. Within 24 hours, the CA sends this certificate to a set of CT logs. This is where we come in: Cloudflare uses an internal process known as “The Crawler” to look through millions of certificate records. Merkle Town dispatches The Crawler to monitor CT logs and check for new certificates. When The Crawler finds a new certificate, it pulls the entire certificate through Merkle Town.

Introducing Certificate Transparency Monitoring

When we process the certificate in Merkle Town, we also check it against a list of monitored domains. If you have CT Monitoring enabled, we’ll send you an alert immediately. This is only possible because of Merkle Town’s existing infrastructure. Also, The Crawler is ridiculously fast.
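The matching step can be pictured as checking each new certificate's names against the list of monitored zones. Below is a hypothetical sketch of that check; the function names and matching rules are illustrative assumptions, not Cloudflare's internal API:

```python
def domain_matches(cert_name: str, monitored_zone: str) -> bool:
    """True if a certificate name equals, or falls under, a monitored zone.

    Handles exact matches, subdomains, and wildcard certificates.
    (Hypothetical logic -- not Cloudflare's actual matching rules.)
    """
    name = cert_name.lower().rstrip(".")
    zone = monitored_zone.lower().rstrip(".")
    if name.startswith("*."):
        name = name[2:]  # treat a wildcard cert as covering its base domain
    return name == zone or name.endswith("." + zone)

def alert_targets(cert_names, monitored_zones):
    """Return the monitored zones that should receive an alert email."""
    return sorted({zone for zone in monitored_zones
                   for name in cert_names
                   if domain_matches(name, zone)})
```

For example, a certificate for slowinternet.cloudflare.com would trigger an alert for anyone monitoring cloudflare.com, while a look-alike such as cloudflare.com.evil.example would not match.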

Introducing Certificate Transparency Monitoring

I Got a Certificate Alert. What Now?

Good question. Most of the time, certificate alerts are routine. Certificates expire and renew on a regular basis, so it’s totally normal to get these emails. If everything looks correct (the issuer, your domain name, etc.), go ahead and toss that email in the trash.

In rare cases, you might get an email that looks suspicious. We provide a detailed support article that will help. The basic protocol is this:

  1. Contact the CA (listed as “Issuer” in the email).
  2. Explain why you think the certificate is suspicious.
  3. The CA should revoke the certificate (if it really is malicious).

We also have a friendly support team that can be reached here. While Cloudflare is not a CA and cannot revoke certificates, our support team knows quite a bit about certificate management and is ready to help.

The Future

Introducing Certificate Transparency Monitoring

Certificate Transparency has started making regular appearances on the Cloudflare blog. Why? It’s required by Chrome and Safari, which dominate the browser market and set precedents for Internet security. But more importantly, CT can help us spot malicious certificates before they are used in attacks. This is why we will continue to refine and improve our certificate detection methods.

What are you waiting for? Go enable Certificate Transparency Monitoring!

04:40

Saturday Morning Breakfast Cereal - Flat [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I call together this meeting of the Slightly Lumpy Oblate Spheroid Earth society, which *technically* isn't a meeting because I'm the only one here.


Today's News:

Wednesday, 07 August

06:24

Saturday Morning Breakfast Cereal - Fish [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Before you send an angry email, I'll point out that my first draft was an extended joke about torturing a fish with a human brain.


Today's News:

02:00

Trace code in Fedora with bpftrace [Fedora Magazine]

bpftrace is a new eBPF-based tracing tool that was first included in Fedora 28. It was developed by Brendan Gregg, Alastair Robertson and Matheus Marchini with the help of a loosely-knit team of hackers across the Net. A tracing tool lets you analyze what a system is doing behind the curtain. It tells you which functions in code are being called, with which arguments, how many times, and so on.

This article covers some basics about bpftrace, and how it works. Read on for more information and some useful examples.

eBPF (extended Berkeley Packet Filter)

eBPF is a tiny virtual machine, or more precisely a virtual CPU, in the Linux kernel. It can load and run small programs in a safe and controlled way in kernel space, which makes it safe to use even on production systems. This virtual machine has its own instruction set architecture (ISA) resembling a subset of modern processor architectures, which makes it easy to translate those programs to real hardware. The kernel performs just-in-time translation to native code on the main architectures to improve performance.

The eBPF virtual machine allows the kernel to be extended programmatically. Nowadays several kernel subsystems take advantage of this new powerful Linux Kernel capability. Examples include networking, seccomp, tracing, and more. The main idea is to attach eBPF programs into specific code points, and thereby extend the original kernel behavior.

eBPF machine language is very powerful, but writing code directly in it is extremely painful, because it’s a low-level language. This is where bpftrace comes in. It provides a high-level language for writing eBPF tracing scripts. The tool translates these scripts to eBPF with the help of the clang/LLVM libraries, and then attaches them to the specified code points.

Installation and quick start

To install bpftrace, run the following command in a terminal using sudo:

$ sudo dnf install bpftrace

Try it out with a “hello world” example:

$ sudo bpftrace -e 'BEGIN { printf("hello world\n"); }'

Note that you must run bpftrace as root due to the privileges required. Use the -e option to specify a program, and to construct the so-called “one-liners.” This example only prints hello world, and then waits for you to press Ctrl+C.

BEGIN is a special probe name that fires only once at the beginning of execution. Every action inside the curly braces { } fires whenever the probe is hit — in this case, it’s just a printf.

Let’s jump now to a more useful example:

$ sudo bpftrace -e 't:syscalls:sys_enter_execve { printf("%s called %s\n", comm, str(args->filename)); }'

This example prints the parent process name (comm) and the name of every new process being created in the system. t:syscalls:sys_enter_execve is a kernel tracepoint. It’s a shorthand for tracepoint:syscalls:sys_enter_execve, but both forms can be used. The next section shows you how to list all available tracepoints.

comm is a bpftrace builtin that represents the process name. filename is a field of the t:syscalls:sys_enter_execve tracepoint. You can access these fields through the args builtin.

All available fields of the tracepoint can be listed with this command:

$ sudo bpftrace -lv "t:syscalls:sys_enter_execve"

Example usage

Listing probes

A central concept in bpftrace is the probe point. Probe points are instrumentation points in code (kernel or userspace) where eBPF programs can be attached. They fit into the following categories:

  • kprobe – kernel function start
  • kretprobe – kernel function return
  • uprobe – user-level function start
  • uretprobe – user-level function return
  • tracepoint – kernel static tracepoints
  • usdt – user-level static tracepoints
  • profile – timed sampling
  • interval – timed output
  • software – kernel software events
  • hardware – processor-level events

All available kprobe/kretprobe, tracepoints, software and hardware probes can be listed with this command:

$ sudo bpftrace -l

The uprobe/uretprobe and usdt probes are userspace probes specific to a given executable. To use them, use the special syntax shown later in this article.

The profile and interval probes fire at fixed time intervals; they are not covered in this article.

Counting system calls

Maps are special BPF data types that store counts, statistics, and histograms. You can use maps to summarize how many times each syscall is being called:

$ sudo bpftrace -e 't:syscalls:sys_enter_* { @[probe] = count(); }'

Some probe types allow wildcards to match multiple probes. You can also specify multiple attach points for an action block using a comma separated list. In this example, the action block attaches to all tracepoints whose name starts with t:syscalls:sys_enter_, which means all available syscalls.

The bpftrace builtin function count() counts the number of times this function is called. @[] represents a map (an associative array). The key of this map is probe, which is another bpftrace builtin that represents the full probe name.

Here, the same action block is attached to every syscall. Each time a syscall is called, its corresponding entry in the map is incremented. When the program terminates, it automatically prints out all declared maps.

This example counts syscalls globally. It’s also possible to filter for a specific process by PID using the bpftrace filter syntax:

$ sudo bpftrace -e 't:syscalls:sys_enter_* / pid == 1234 / { @[probe] = count(); }'

Write bytes by process

Using these concepts, let’s analyze how many bytes each process is writing:

$ sudo bpftrace -e 't:syscalls:sys_exit_write /args->ret > 0/ { @[comm] = sum(args->ret); }'

bpftrace attaches the action block to the write syscall return probe (t:syscalls:sys_exit_write). Then, it uses a filter to keep only positive return values, discarding error codes (/args->ret > 0/).

The map key comm represents the process name that called the syscall. The sum() builtin function accumulates the number of bytes written for each map entry or process. args is a bpftrace builtin to access tracepoint’s arguments and return values. Finally, if successful, the write syscall returns the number of written bytes. args->ret provides access to the bytes.

Read size distribution by process (histogram):

bpftrace supports the creation of histograms. Let’s analyze one example that creates a histogram of the read size distribution by process:

$ sudo bpftrace -e 't:syscalls:sys_exit_read { @[comm] = hist(args->ret); }'

Histograms are BPF maps, so they must always be attributed to a map (@). In this example, the map key is comm.

The example makes bpftrace generate one histogram for every process that calls the read syscall. To generate just one global histogram, attribute the hist() function just to ‘@’ (without any key).

bpftrace automatically prints out declared histograms when the program terminates. The value used as base for the histogram creation is the number of read bytes, found through args->ret.

Tracing userspace programs

You can also trace userspace programs with uprobes/uretprobes and USDT (User-level Statically Defined Tracing). The next example uses a uretprobe, which probes the end of a user-level function. It captures the command lines issued in every bash running on the system:

$ sudo bpftrace -e 'uretprobe:/bin/bash:readline { printf("readline: \"%s\"\n", str(retval)); }'

To list all available uprobes/uretprobes of the bash executable, run this command:

$ sudo bpftrace -l "uprobe:/bin/bash"

uprobe instruments the beginning of a user-level function’s execution, and uretprobe instruments the end (its return). readline() is a function of /bin/bash, and it returns the typed command line. retval is the return value for the instrumented function, and can only be accessed on uretprobe.

When using uprobes, you can access arguments with arg0..argN. A str() call is necessary to turn the char * pointer into a string.

Shipped Scripts

There are many useful scripts shipped with the bpftrace package. You can find them in the /usr/share/bpftrace/tools/ directory.

Among them, you can find:

  • killsnoop.bt – Trace signals issued by the kill() syscall.
  • tcpconnect.bt – Trace all TCP network connections.
  • pidpersec.bt – Count new processes (via fork) per second.
  • opensnoop.bt – Trace open() syscalls.
  • vfsstat.bt – Count some VFS calls, with per-second summaries.

You can directly use the scripts. For example:

$ sudo /usr/share/bpftrace/tools/killsnoop.bt

You can also study these scripts as you create new tools.

Links


Photo by Roman Romashov on Unsplash.

Tuesday, 06 August

13:32

Code as Craft: Understand the role of Style in e-commerce shopping [Code as Craft]

Aesthetic style is key to many purchasing decisions. When considering an item for purchase, buyers need to be aligned not only with the functional aspects (e.g. description, category, ratings) of an item’s specification, but also its aesthetic aspects (e.g. modern, classical, retro) as well. Style is important at Etsy, where we have more than 60 million items and hundreds of thousands of them can differ by style and aesthetic. At Etsy, we strive to understand the style preferences of our buyers in order to surface content that best fits their tastes.

Our chosen approach to encoding the aesthetic aspects of an item is to label the item with one of a discrete set of “styles,” of which “rustic,” “farmhouse,” and “boho” are examples. Since manually labeling millions of listings with a style class is not feasible, especially in a marketplace that is ever changing, we wanted to implement a machine learning model that best predicts and captures listings’ styles. Furthermore, in order to serve style-inspired listings to our users, we leveraged the style predictor to develop a mechanism for forecasting user style preferences.

Style Model Implementation

Merchandising experts identified style categories.

For this task, the style labels are one of the classes that have been identified by our merchandising experts. Our style model is a machine learning model which, when given a listing and its features (text and images), can output a style label. The style model was designed to not only output these discrete style labels but also a multidimensional vector representing the general style aspects of a listing. Unlike a discrete label (“naval”, “art-deco”, “inspirational”) which can only be one class, the style vector encodes how a listing can be represented by all these style classes in varying proportions. While the discrete style labels can be used in predictive tasks to recommend items to users from particular style classes (say filtering recommended listings to a user from just “art-deco”), the style vector is supposed to serve as a machine learning signal into our other recommendation models. For example, on a listing page on Etsy, we recommend similar items. This model can now surface items that are not only functionally the same (“couch” for another “couch”) but can potentially recommend items that are instead from the same style (“mid-century couch” for a “mid-century dining table”).

The first step in building our listing style prediction model was preparing a training data set. For this, we worked with Etsy’s in-house merchandising experts to identify a list of 43 style classes. We further leveraged search visit logs to construct a “ground truth” dataset of items using these style classes. For example, listings that get a click, add to cart or purchase event for the search query “boho” are assigned the “boho” class label. This gave us a large enough labeled dataset to train a style predictor model.

Style Deep Neural Network

Once we had a ground truth dataset, our task was to build a listing style predictor model that could classify any listing into one of 43 styles (actually 42 styles and an “everything else” catch-all). For this task, we used a two-layer neural network to combine the image and text features in a non-linear fashion. The image features are extracted from the primary image of a listing using a retrained ResNet model. The text features are the TF-IDF values computed on the titles and tags of the items. The image and text vectors are then concatenated and fed as input to the neural network, which learns non-linear relationships between text and image features that best predict a listing’s style. The network was trained on a GPU machine on Google Cloud, and we experimented with the architecture and different learning parameters until we got the best validation/test accuracy.
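The forward pass of such a model can be sketched as follows. The feature and layer sizes and the randomly initialized weights are placeholders for illustration; the post does not publish the exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes -- the real feature and layer dimensions are not published.
IMG_DIM, TXT_DIM, HIDDEN, N_STYLES = 16, 32, 24, 43

W1 = rng.normal(0, 0.1, (IMG_DIM + TXT_DIM, HIDDEN))  # stand-ins for trained weights
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_STYLES))
b2 = np.zeros(N_STYLES)

def predict_style(image_vec, text_vec):
    """Concatenate image and text features, then run a two-layer network."""
    x = np.concatenate([image_vec, text_vec])
    hidden = np.maximum(0.0, x @ W1 + b1)    # ReLU hidden layer
    logits = hidden @ W2 + b2
    probs = np.exp(logits - logits.max())    # softmax over the style classes
    return hidden, probs / probs.sum()

img = rng.normal(size=IMG_DIM)   # e.g. ResNet features of the primary image
txt = rng.normal(size=TXT_DIM)   # e.g. TF-IDF values over title and tags
embedding, probs = predict_style(img, txt)
```

The hidden activations here play the role of the low-dimensional style embedding described below, while the softmax output gives the per-class style scores.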


By explicitly taking style into account the nearest neighbors are more style aligned

User Style

As described above, the style model helps us extract low-dimension embedding vectors that capture this stylistic information for a listing, using the penultimate layer of the neural network. We computed the style embedding vector using the style model for all the listings in Etsy’s corpus.

Given these listing style embeddings, we wanted to understand users’ long-term style preferences and represent it as a weighted average of 42 articulated style labels. For every user, subject to their privacy preferences, we first gathered the entire history of “purchased”, “favorited”, “clicked” and “add to cart” listings in the past three months. From all these listings that a user interacted with, we combined their corresponding style vectors to come up with a final style representation for each user (by averaging them).
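Averaging the interaction vectors can be sketched like this. The optional per-event weights (e.g. favoring purchases over clicks) are an assumption for illustration; the post only says the vectors are averaged:

```python
import numpy as np

def user_style_embedding(interactions, listing_styles, weights=None):
    """Average the style vectors of listings a user interacted with.

    `interactions` is a list of listing ids; `listing_styles` maps a listing
    id to its style vector. An optional per-event weight could favor
    purchases over clicks (an assumption, not the described method).
    """
    vecs = np.stack([listing_styles[lid] for lid in interactions])
    if weights is None:
        return vecs.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return (vecs * w[:, None]).sum(axis=0) / w.sum()

# Toy 2-dimensional style vectors for two listings:
styles = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}
emb = user_style_embedding(["a", "b"], styles)
```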

Building Style-aware User Recommendations

There are different recommendation modules on Etsy, some of which are personalized for each user. We wanted to leverage user style embeddings in order to provide more personalized recommendations to our users. For recommendation modules, we have a two-stage system: we first generate a candidate set, which is a probable set of listings that are most relevant to a user. Then, we apply a personalized ranker to obtain a final personalized list of recommendations.  Recommendations may be provided at varying levels of personalization to a user based on a number of factors, including their privacy settings.

In this very first iteration of style-aware user recommendations, we apply user style understanding to generate a candidate set based on user style embeddings and their latest interacted taxonomies. This candidate set is used for the Our Picks For You module on the homepage. The idea is to combine an understanding of a user’s long-term style preferences with their recent interest in certain taxonomies.

This work can be broken down into the following steps:

  • For each user, obtain top three styles and three latest taxonomies.

Given user style embeddings, we take the top 3 styles with the highest probability as the “predicted user styles.” Latest taxonomies are useful because they indicate users’ recent interests and shopping missions.

  • For each (taxonomy, style) pair, generate 100 listings.

Given a taxonomy, sort all the listings in that taxonomy by their style prediction scores for the given style class, from high to low. We take the top 100 of these listings.

“Minimal” listings in “Home & Living”

“Floral” listings in “Home & Living”
  • For each user, remove invalid (taxonomy, style) pairs.  

Taxonomy/style validation checks whether a style makes sense for a certain taxonomy, e.g. “Hygge” is not a valid style for jewelry.

  • For each user, aggregate all listings generated by each valid style & taxonomy pair and take top 200 listings with the highest average purchase and favorite rate.

These become the style based recommendations for a user.

1-4: boho + bags_and_purses.backpacks
5-7: boho + weddings.clothing
8,13,16: minimal + bags_and_purses.backpacks
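Putting the steps above together, the candidate-generation pass might look like the following sketch. The data shapes and field names are hypothetical, not Etsy's production code:

```python
def recommend(user_styles, user_taxonomies, listings, valid_pairs, top_k=200):
    """Generate style-based candidates: top listings per valid
    (taxonomy, style) pair, then rank the union by engagement."""
    candidates = []
    for taxonomy in user_taxonomies:            # e.g. 3 latest taxonomies
        for style in user_styles:               # e.g. top 3 predicted styles
            if (taxonomy, style) not in valid_pairs:
                continue                        # e.g. no "Hygge" jewelry
            in_taxo = [l for l in listings if l["taxonomy"] == taxonomy]
            in_taxo.sort(key=lambda l: l["scores"][style], reverse=True)
            candidates.extend(in_taxo[:100])    # top 100 per pair
    # Aggregate, rank by engagement (avg purchase + favorite rate), dedupe.
    candidates.sort(key=lambda l: l["engagement"], reverse=True)
    seen, result = set(), []
    for l in candidates:
        if l["id"] not in seen:
            seen.add(l["id"])
            result.append(l)
    return result[:top_k]

listings = [
    {"id": 1, "taxonomy": "home", "scores": {"boho": 0.9, "minimal": 0.1}, "engagement": 0.5},
    {"id": 2, "taxonomy": "home", "scores": {"boho": 0.2, "minimal": 0.8}, "engagement": 0.9},
    {"id": 3, "taxonomy": "jewelry", "scores": {"boho": 0.7, "minimal": 0.3}, "engagement": 0.4},
]
valid = {("home", "boho"), ("home", "minimal")}
picks = recommend(["boho", "minimal"], ["home", "jewelry"], listings, valid, top_k=2)
```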

Style Analysis 

We were extremely interested in using our style model to answer questions about users’ sense of style. Our questions ranged from “How are style and taxonomy related? Do they have a lot in common?” and “Do users care about style while buying items?” to “How do style trends change across the year?” Our style model enables us to answer at least some of these questions and helps us better understand our users. To answer them and dig further, we leveraged the style model and the generated embeddings to perform an analysis of transaction data.

Next, we looked at the seasonality effect behind shopping for different styles on Etsy. We began by looking at unit sales and purchase rates of different styles across the year, and observed that most of our styles are influenced by seasonality. For example, the “Romantic” style peaks in February because of Valentine’s Day, and the “Inspirational” style peaks during graduation season. We ran a statistical stationarity test on the unit sales time series of different styles and found that the majority were non-stationary: most styles show different shopping trends throughout the year rather than constant unit sales. This provided further evidence that users’ tastes follow different trends across the year.




Using the style embeddings to study user purchase patterns not only provided us great evidence that users care about style, but also inspired us to further incorporate style into our machine learning products in the future.

Etsy is a marketplace for millions of unique and creative goods. Thus, our mission as machine learning practitioners is to build pathways that connect the curiosity of our buyers with the creativity of our sellers. Understanding both listing and user styles is another one of our novel building blocks to achieve this goal.

For further details into our work you can read our paper published in KDD 2019.

Authors: Aakash Sabharwal, Jingyuan (Julia) Zhou & Diane Hu


07:50

Saturday Morning Breakfast Cereal - Cinnamon Buns [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Apparently it's now possible to get a corn dog made with duck fat, presumably so you can lie to yourself about why you're eating a corndog.


Today's News:

Monday, 05 August

09:01

Saturday Morning Breakfast Cereal - Quantum [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Usually when I do a quantum computing joke, I feel the need to apologize to Scott Aaronson. For this particular one, I apologize to Seth Lloyd.


Today's News:

02:00

4 cool new projects to try in COPR for August 2019 [Fedora Magazine]

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

Duc

Duc is a collection of tools for disk usage inspection and visualization. Duc uses an indexed database to store the sizes of files on your system. Once the indexing is done, you can quickly get an overview of your disk usage using either its command-line interface or its GUI.

Installation instructions

The repo currently provides duc for EPEL 7, Fedora 29 and 30. To install duc, use these commands:

sudo dnf copr enable terrywang/duc 
sudo dnf install duc

MuseScore

MuseScore is software for working with music notation. With MuseScore, you can create sheet music using a mouse, a virtual keyboard or a MIDI controller. MuseScore can then play the created music or export it as a PDF, MIDI or MusicXML. Additionally, there’s an extensive database of sheet music created by MuseScore users.

Installation instructions

The repo currently provides MuseScore for Fedora 29 and 30. To install MuseScore, use these commands:

sudo dnf copr enable jjames/MuseScore
sudo dnf install musescore

Dynamic Wallpaper Editor

Dynamic Wallpaper Editor is a tool for creating and editing collections of wallpapers in GNOME that change over time. This can be done by writing XML files by hand; however, Dynamic Wallpaper Editor makes it easy with its graphical interface, where you can simply add pictures, arrange them, and set the duration of each picture and the transitions between them.

Installation instructions

The repo currently provides dynamic-wallpaper-editor for Fedora 30 and Rawhide. To install dynamic-wallpaper-editor, use these commands:

sudo dnf copr enable atim/dynamic-wallpaper-editor
sudo dnf install dynamic-wallpaper-editor

Manuskript

Manuskript is a tool for writers, aimed at making large writing projects easier to manage. It serves as an editor for the text itself, as well as a tool for organizing notes about the story, its characters and individual plots.

Installation instructions

The repo currently provides Manuskript for Fedora 29, 30 and Rawhide. To install Manuskript, use these commands:

sudo dnf copr enable notsag/manuskript 
sudo dnf install manuskript

Sunday, 04 August

19:44

Terminating Service for 8Chan [The Cloudflare Blog]

The mass shootings in El Paso, Texas and Dayton, Ohio are horrific tragedies. In the case of the El Paso shooting, the suspected terrorist gunman appears to have been inspired by the forum website known as 8chan. Based on evidence we've seen, it appears that he posted a screed to the site immediately before beginning his terrifying attack on the El Paso Walmart killing 20 people.

Unfortunately, this is not an isolated incident. Nearly the same thing happened on 8chan before the terror attack in Christchurch, New Zealand. The El Paso shooter specifically referenced the Christchurch incident and appears to have been inspired by the largely unmoderated discussions on 8chan which glorified the previous massacre. In a separate tragedy, the suspected killer in the Poway, California synagogue shooting also posted a hate-filled “open letter” on 8chan. 8chan has repeatedly proven itself to be a cesspool of hate.

8chan is among the more than 19 million Internet properties that use Cloudflare's service. We just sent notice that we are terminating 8chan as a customer effective at midnight tonight Pacific Time. The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths. Even if 8chan may not have violated the letter of the law in refusing to moderate their hate-filled community, they have created an environment that revels in violating its spirit.

We do not take this decision lightly. Cloudflare is a network provider. In pursuit of our goal of helping build a better internet, we’ve considered it important to provide our security services broadly to make sure as many users as possible are secure, and thereby making cyberattacks less attractive — regardless of the content of those websites.  Many of our customers run platforms of their own on top of our network. If our policies are more conservative than theirs it effectively undercuts their ability to run their services and set their own policies. We reluctantly tolerate content that we find reprehensible, but we draw the line at platforms that have demonstrated they directly inspire tragic events and are lawless by design. 8chan has crossed that line. It will therefore no longer be allowed to use our services.

What Will Happen Next

Unfortunately, we have seen this situation before and so we have a good sense of what will play out. Almost exactly two years ago we made the determination to kick another disgusting site off Cloudflare's network: the Daily Stormer. That caused a brief interruption in the site's operations but they quickly came back online using a Cloudflare competitor. That competitor at the time promoted as a feature the fact that they didn't respond to legal process. Today, the Daily Stormer is still available and still disgusting. They have bragged that they have more readers than ever. They are no longer Cloudflare's problem, but they remain the Internet's problem.

I have little doubt we'll see the same happen with 8chan. While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online. It does nothing to address why mass shootings occur. It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we've solved our own problem, but we haven't solved the Internet's.

In the two years since the Daily Stormer, what we have done to try to solve the Internet’s deeper problem is engage with law enforcement and civil society organizations to find solutions. Among other things, that resulted in us cooperating around monitoring potential hate sites on our network and notifying law enforcement when there was content that contained an indication of potential violence. We will continue to work within the legal process to share information when we can to hopefully prevent horrific acts of violence. We believe this is our responsibility and, given Cloudflare's scale and reach, we are hopeful we will continue to make progress toward solving the deeper problem.

Rule of Law

We continue to feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often. Some have wrongly speculated this is due to some conception of the United States' First Amendment. That is incorrect. First, we are a private company and not bound by the First Amendment. Second, the vast majority of our customers, and more than 50% of our revenue, comes from outside the United States where the First Amendment and similarly libertarian freedom of speech protections do not apply. The only relevance of the First Amendment in this case and others is that it allows us to choose who we do and do not do business with; it does not obligate us to do business with everyone.

Instead our concern has centered around another much more universal idea: the Rule of Law. The Rule of Law requires policies be transparent and consistent. While it has been articulated as a framework for how governments ensure their legitimacy, we have used it as a touchstone when we think about our own policies.

We have been successful because we have a very effective technological solution that provides security, performance, and reliability in an affordable and easy-to-use way. As a result of that, a huge portion of the Internet now sits behind our network. 10% of the top million, 17% of the top 100,000, and 19% of the top 10,000 Internet properties use us today. 10% of the Fortune 1,000 are paying Cloudflare customers.

Cloudflare is not a government. While we've been successful as a company, that does not give us the political legitimacy to make determinations on what content is good and bad. Nor should it. Questions around content are real societal issues that need politically legitimate solutions. We will continue to engage with lawmakers around the world as they set the boundaries of what is acceptable in their countries through due process of law. And we will comply with those boundaries when and where they are set.

Europe, for example, has taken a lead in this area. As we've seen governments there attempt to address hate and terror content online, there is recognition that different obligations should be placed on companies that organize and promote content — like Facebook and YouTube — rather than those that are mere conduits for that content. Conduits, like Cloudflare, are not visible to users and therefore cannot be transparent and consistent about their policies.

The unresolved question is how should the law deal with platforms that ignore or actively thwart the Rule of Law? That's closer to the situation we have seen with the Daily Stormer and 8chan. They are lawless platforms. In cases like these, where platforms have been designed to be lawless and unmoderated, and where the platforms have demonstrated their ability to cause real harm, the law may need additional remedies. We and other technology companies need to work with policy makers in order to help them understand the problem and define these remedies. And, in some cases, it may mean moving enforcement mechanisms further down the technical stack.

Our Obligation

Cloudflare's mission is to help build a better Internet. At some level firing 8chan as a customer is easy. They are uniquely lawless and that lawlessness has contributed to multiple horrific tragedies. Enough is enough.

What's hard is defining the policy that we can enforce transparently and consistently going forward. We, and other technology companies like us that enable the great parts of the Internet, have an obligation to help propose solutions to deal with the parts we're not proud of. That's our obligation and we're committed to it.

Unfortunately the action we take today won’t fix hate online. It will almost certainly not even remove 8chan from the Internet. But it is the right thing to do. Hate online is a real issue. Here are some organizations that have active work to help address it:

Our whole Cloudflare team’s thoughts are with the families grieving in El Paso, Texas and Dayton, Ohio this evening.

07:58

Saturday Morning Breakfast Cereal - You [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I mean, if these books actually worked, wouldn't everybody only buy 1?


Today's News:

Saturday, 03 August

18:00

Disappearing Sunday Update [xkcd.com]



You can read the chapter list and introduction to How To at blog.xkcd.com and learn more at xkcd.com/how-to.

10:36

Saturday Morning Breakfast Cereal - Preference [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
'You dunce' really should be a more common feature of casual conversation.


Today's News:

Friday, 02 August

13:15

How Etsy Handles Peeking in A/B Testing [Code as Craft]

Etsy relies heavily on experimentation to improve our decision-making process. We leverage our internal A/B testing tool when we launch new features, polish the look and feel of our site, or even make changes to our search and recommendation algorithms. For years, Etsy has prided ourselves on our culture of continuous experimentation. However, as our experimentation platform scales and the velocity of experimentation increases rapidly across the company, we also face a number of new challenges. In this post, we investigate one of these challenges: how to peek at experimental results early in order to increase the velocity of our decision-making without sacrificing the integrity of our results.

The Peeking Problem

In A/B testing, we’re looking to determine if a metric we care about (e.g. the percentage of visitors who make a purchase) is different between the control and treatment groups. But when we detect a change in the metric, how do we know if it is real or due to random chance? We can look at the p-value of our statistical test, which indicates the probability we would see the detected difference between groups assuming there is no true difference. When the p-value falls below the significance level threshold, we say that the result is statistically significant and we reject the hypothesis that the control and treatment are the same.
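As a concrete sketch of the test described above, here is a standard two-proportion z-test under the normal approximation (Etsy's internal tool may compute significance differently; the function name is ours):

```python
from math import erf, sqrt


def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates between
    a control (a) and a treatment (b), using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # no variation observed yet; nothing to reject
    z = (p_b - p_a) / se
    return 2 * (1 - norm_cdf(abs(z)))
```

For example, `two_proportion_p_value(500, 1000, 560, 1000)` (50% vs. 56% conversion) yields a p-value below 0.01, while identical rates give a p-value of 1.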

So we can just stop the experiment when the hypothesis test for the metric we care about has a p-value of less than 0.05, right? Wrong. To draw the strongest conclusions from the p-value in the context of an A/B test, we have to fix the sample size of an experiment in advance and make a decision on the p-value only once. Peeking at the data regularly and stopping an experiment as soon as the p-value dips below 0.05 increases the rate of Type I errors, or false positives, because the false positive rate of each test compounds, increasing the overall probability that you’ll see a false result.

Let’s look at an example to gain a more concrete view of the problem. Suppose we run an experiment where there is no true change between the control and the experimental variant, and both have a baseline target metric of 50%. If we are using a significance level of 0.1 and there is no peeking (in other words, the sample size needed before a decision is made is determined in advance), then the rate of false positives is 10%. However, if we do peek, checking the significance level at every observation, then after 500 observations there is over a 50% chance of incorrectly concluding that the treatment is different from the control (Figure 1).

Figure 1: Chances for accepting that A and B are different, with A and B both converting at 50%.
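The compounding effect behind Figure 1 can be reproduced with a small simulation (a sketch with illustrative parameters, not the post's exact setup): run many A/A tests in which both arms truly convert at 50%, and compare the false positive rate when a z-test is checked repeatedly against checking it once at a fixed horizon.

```python
import random
from math import erf, sqrt

random.seed(42)


def p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value of a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))


ALPHA, RUNS, MAX_N, CHECK_EVERY = 0.1, 500, 500, 10


def run_aa_test(peek: bool) -> bool:
    """One A/A test: both arms truly convert at 50%.
    Returns True when we (wrongly) declare the arms different."""
    conv_a = conv_b = 0
    for n in range(1, MAX_N + 1):
        conv_a += random.random() < 0.5
        conv_b += random.random() < 0.5
        # peeking: re-test significance as the data accumulates
        if peek and n >= 50 and n % CHECK_EVERY == 0:
            if p_value(conv_a, n, conv_b, n) < ALPHA:
                return True
    # fixed horizon: a single test at the predetermined sample size
    return p_value(conv_a, MAX_N, conv_b, MAX_N) < ALPHA


peeking_fp = sum(run_aa_test(True) for _ in range(RUNS)) / RUNS
fixed_fp = sum(run_aa_test(False) for _ in range(RUNS)) / RUNS
print(f"false positive rate with peeking: {peeking_fp:.0%}, fixed horizon: {fixed_fp:.0%}")
```

Even peeking only every 10 observations pushes the empirical false positive rate well above the nominal 10% level, while the fixed-horizon rate stays near it.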

At this point, you might already have figured out that the simplest way to solve the problem would be to fix a sample size in advance and run the experiment to completion before checking the significance level. However, this requires strictly enforced separation between the design and analysis of experiments, which can have large repercussions throughout the experimental process. In the early stages of an experiment, we may miss a bug in the setup or in the feature being tested that will invalidate our results later. If we don’t catch these early, it slows down our experimental process unnecessarily, leaving less time for iterations and real site changes. Another setup issue is that it can be difficult to predict the effect size product teams would like to obtain prior to the experiment, which makes it hard to optimize the sample size in advance. Even assuming we set up our experiment perfectly, there are implications down the line. If an experiment is impacting a metric in a negative way, we want to know as soon as possible so we don’t degrade our users’ experience. These considerations become even more pronounced when we’re running an experiment on a small population, or in a less-trafficked part of the site, where it can take months to reach the target sample size. Across teams, we want to be able to iterate quickly without sacrificing the integrity of our results.

With this in mind, we need to come up with statistical methodology that will give reliable inference while still providing product teams the ability to continuously monitor experiments, especially for our long-running experiments. At Etsy, we tackle this challenge from two sides, user interface and statistical procedures. We made a few user interface changes to our A/B testing tool to prevent our stakeholders from drawing false conclusions, and we implemented a flexible p-value stopping-point in our platform, which takes inspiration from the sequential testing concept in statistics.

It is worth noting that the peeking problem has been studied by many, including industry veterans1, 2, developers of large-scale commercial A/B testing platforms3, 4 and academic researchers5. Moreover, it is hardly a challenge exclusive to A/B testing on the web. The peeking problem has troubled the medical field for a long time; for example, medical scientists could peek at the results and stop a clinical trial early because of initial positive results, leading to flawed interpretations of the data6, 7.

Our Approach

In this section, we dive into the approach that we have designed and adapted to address the peeking problem: transitioning from traditional, fixed-horizon testing to sequential testing, and preventing peeking behaviors through user interface changes.

Sequential Testing with Difference in Converting Visits

Sequential testing, which has been widely used in clinical trials8, 9 and has gained recent popularity for web experimentation10, guarantees that if we end the test when the p-value is below a predefined threshold α, the false positive rate will be no more than α. It does so by computing the probabilities of false positives at each potential stopping point using dynamic programming, assuming that our test statistic is normally distributed. Since we can compute these probabilities, we can adjust the test’s p-value threshold, which in turn changes the false positive chance, at every step so that the total false positive rate stays below the threshold we desire. Sequential testing therefore enables concluding experiments as soon as the data justifies it, while keeping our false positive rate in check.

We investigated a few methods, including O’Brien-Fleming, Pocock and sequential testing using the difference in successful observations, and ultimately settled on the last approach. Using the difference in successful observations, we look at the raw difference in converting visits and stop an experiment when this difference becomes large enough. The difference threshold is only valid until we reach a total number of converted visits. This method is good at detecting small changes and does so quickly, which makes it most suitable for our needs. Nevertheless, we did consider its cons as well. Traditional power and significance calculations use the proportion of successes, whereas looking at the difference in converted visits does not take the total population size into account. Because of this, with high baseline target metrics we are more likely to reach the total number of converted visits before we see a large enough difference in converted visits, which means we are more likely to miss a true change in these cases. Furthermore, it requires extra setup when an experiment is not evenly split across variants. We chose to use this method with a few adjustments for these shortcomings so we could increase our speed of detecting real changes between experimental groups.

Our implementation of this method is influenced by the approach Evan Miller described here. This method sets a threshold for the difference between the control and treatment converted visits based on the minimal detectable effect and the target false positive and negative rates. If the experiment reaches or passes the threshold, we allow the experiment to end early. If this difference is not reached, we assess our results using the standard approach of a power analysis. The combination of these methods creates a continuous p-value threshold under which we can safely stop an experiment. This threshold is lower near the beginning of an experiment and converges to our significance level as the experiment reaches our targeted power. This allows us to detect changes more quickly with low baselines while not missing smaller changes for experiments with high baseline target metrics.
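Miller's published rule is compact enough to sketch directly. As described in "Simple Sequential A/B Testing": pick a target total number of conversions N before the test starts, declare the treatment a winner as soon as it leads the control by 2√N converted visits, and declare no winner if N total conversions arrive first (this gives approximately a 5% false positive rate; Etsy's production thresholds, uneven-split adjustments and haircuts are not shown):

```python
from math import sqrt


def sequential_decision(n_target: int, treatment_conv: int, control_conv: int) -> str:
    """Evan Miller's simple sequential stopping rule.

    n_target: total number of conversions, chosen before the test starts.
    Returns 'treatment wins', 'no winner', or 'keep going'.
    """
    threshold = 2 * sqrt(n_target)
    if treatment_conv - control_conv >= threshold:
        return "treatment wins"   # early stop: lead is large enough
    if treatment_conv + control_conv >= n_target:
        return "no winner"        # horizon reached without a winner
    return "keep going"
```

For example, with `n_target=1000` the stopping threshold is about 63 converted visits, so a 540-to-460 split ends the test early in favor of the treatment.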

Figure 2: Example of a p-value threshold curve.

To validate this approach, we tested it on results from experimental simulations with various baselines and effect sizes using mock experimental conditions. Before implementing, we wanted to understand:

  1. What effect will this have on false positive rates?
  2. What effect does early stopping have on reported effect size and confidence intervals?
  3. How much faster will we get a signal for experiments with true changes between groups?

We found that when using a p-value curve tuned for a 5% false positive rate, our early stopping threshold does not materially increase the false positive rate and we can be confident of a directional change.  

One of the downfalls of stopping experiments early, however, is that with an effect size under ~5%, we tend to overestimate the impact and widen the confidence interval. To accurately attribute increases in metrics to experimental wins, we developed a haircut formula to apply to the effect size for experiments that we decide to end early. Furthermore, we offset some of these issues by setting a standard of running experiments for at least 7 days to account for different weekend and weekday trends.

Figure 3: Reported Vs. True Effect Size

We tested this method with a series of simulations and saw that for experiments which would take 3 weeks to run under a standard power analysis, we could save at least a week in most cases where there was a real change between variants. This helped us feel confident that, even with a slight overestimation of effect size, the time savings were worth it for teams with low baseline target metrics who typically struggle with long experimental run times.

Figure 4: Day Savings From Sequential Testing

UI Improvements

In our A/B testing tool, we wanted stakeholders to have access to the metrics and calculations we measure throughout the duration of an experiment. In addition to the p-value, we care about power and the confidence interval. First, power. Teams at Etsy often have to coordinate experiments on the same page, so it is important for teams to have an idea of how long an experiment will have to run assuming no early stopping. We do this by running an experiment until we reach a set power.

Second, the confidence interval (CI) is the range of values that serves as a good estimate of the true value of a particular metric. In the context of A/B testing, for example, if we ran the experiment millions of times, 90% of the time the true effect size would fall within the 90% CI. There are three things that we care most about in relation to the confidence interval of an effect in an experiment:

  1. Whether the CI includes zero, because this maps exactly to the decision we would make with the p-value; if the 90% CI includes zero, then the p-value is greater than 0.1. Conversely, if it doesn’t include zero, then the p-value is less than 0.1;
  2. The smaller the CI, the better estimate of the parameter we have;
  3. The farther away from zero the CI is, the more confident we can be that there is a true difference.

Previously in our A/B testing tool UI, we displayed statistical data as shown in the table below on the left. The “observed” column indicates results for the control and there is a “% Change” column for each treatment variant. When hovering over a number in the “% Change” column, a popover table appears, showing the observed and actual effect size, confidence level, p-value, and number of days we could expect to have enough data to power the experiment based on our expected effect size. 

Figure 5: User interface before changes.

However, always displaying numerical results in the “% Change” column could lead to stakeholders peeking at data and making an incorrect inference about the success of the experiment. Therefore, we added a row in the hover table to show the power of the test (assuming some fixed effect size), and made the following changes to our user interface:

  1. Show a visualization of the C.I. and color the bar red when the C.I. is entirely negative to indicate a significant decrease, green when the C.I. is entirely positive to indicate a significant increase, and grey when the C.I. spans 0.
  2. Display different messages in the “% Change” column and hover table to indicate different stages the experiment metric is currently in, depending on its power, p-value and calculated flexible p-value threshold. In the “% Change” column, possible messages include “Waiting on data”, “Not enough data”, “No change” and “+/- X %” (to show significant increase/ decrease). In the hover table, possible headers include “metric is not powered”, “there is no detectable change”, “we’re confident we detected a change”, and “directional change is correct but magnitude might be inflated” when early stopping is reached but the metric is not powered yet.   
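The resulting display logic can be sketched as a small helper (a hypothetical function with simplified states; the real tool also factors in the p-value and the flexible stopping threshold when choosing among its messages):

```python
def ci_display(lower: float, upper: float, powered: bool, early_stop: bool):
    """Map a metric's confidence interval and test state to a UI color
    and message, mirroring the states described above (sketch only)."""
    if not powered and not early_stop:
        return ("grey", "Not enough data")
    if lower <= 0 <= upper:
        return ("grey", "No change")          # CI spans zero
    color = "green" if lower > 0 else "red"   # entirely positive / negative
    if not powered:
        # early-stopping threshold crossed before reaching target power
        return (color, "directional change is correct but magnitude might be inflated")
    midpoint = (lower + upper) / 2            # illustrative point estimate
    return (color, f"{midpoint:+.1%}")
```

For instance, a fully powered metric with a 90% CI of [+1%, +5%] would render as a green bar labeled "+3.0%", while a CI spanning zero renders grey with "No change".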

Figure 6: User interface after changes.

Even after making these UI changes, making a decision on when to stop an experiment and whether or not to launch it is not always simple. Generally some things we advise our stakeholders to consider are:

  1. Do we have statistically significant results that support our hypothesis?
  2. Do we have statistically significant results that are positive but aren’t what we anticipated?
  3. If we don’t have enough data yet, can we just keep it running or is it blocking other experiments?
  4. Is there anything broken in the product experience that we want to correct, even if the metrics don’t show anything negative?
  5. If we have enough information on the main metrics overall, do we have enough information to iterate? For example, if we want to look at impact on a particular segment, which could be 50% of the traffic, then we’ll need to run the experiment twice as long as we had to in order to look at the overall impact.

We hope that these UI changes will help our stakeholders make better informed decisions while still letting them uncover cases where they have changed something more dramatically than expected and thus can stop the experiment sooner.

Further Discussion

In this section, we discuss a few more issues we examined while designing Etsy’s solutions to peeking.

Trade-off Between Power and Significance

There is a trade-off between Type I (false positive) and Type II (false negative) errors – if we decrease the probability of one, the probability of the other increases – for a more detailed explanation, please see this short post. This translates into a trade-off between p-value and power: if we require stronger evidence to reject the null hypothesis (i.e. a smaller p-value threshold), then there is a smaller chance that we will correctly reject a false null hypothesis, that is, decreased power. The different messages we display on the user interface balance this issue to some degree. In the end, it is a choice that we have to make based on our priorities and focus in experimentation.
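A quick way to see this trade-off numerically is the normal-approximation power formula for a two-proportion test (a sketch with illustrative numbers, not Etsy's power calculation):

```python
from math import erf, sqrt

# standard two-sided critical values of the standard normal distribution
Z_CRIT = {0.10: 1.645, 0.05: 1.960, 0.01: 2.576}


def power(p_control: float, p_treatment: float, n_per_arm: int, alpha: float) -> float:
    """Approximate power of a two-proportion z-test (normal approximation)."""
    se = sqrt(p_control * (1 - p_control) / n_per_arm
              + p_treatment * (1 - p_treatment) / n_per_arm)
    z_effect = abs(p_treatment - p_control) / se
    # power = P(Z < z_effect - z_crit) under the alternative
    return 0.5 * (1 + erf((z_effect - Z_CRIT[alpha]) / sqrt(2)))


# lowering the p-value threshold (alpha) costs power at a fixed sample size
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha={alpha:.2f} -> power={power(0.50, 0.52, 5000, alpha):.2f}")
```

With a 50% baseline, a 2-point lift, and 5,000 visits per arm, tightening alpha from 0.10 to 0.01 cuts the power roughly in half: stronger evidence requirements directly reduce the chance of detecting a true effect.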

Weekend vs. Weekday Data Sample Size

At Etsy, the volume of traffic and intent of visitors varies from weekdays to weekends. This is not a concern for the sequential testing approach that we ultimately chose. However, it would be an issue for some other methods that require equal daily data sample size. During our research, we looked into ways to handle the inconsistency in our daily data sample size. We found that the GroupSeq package in R, which enables the construction of group sequential designs and has various alpha spending functions available to choose among, is a good way to account for this.

Other Types of Designs

The sequential sampling method that we have designed is a straightforward form of a stopping rule, modified to best suit our needs and circumstances. However, there are other, more formally defined sequential approaches, such as the Sequential Probability Ratio Test (SPRT), which is utilized by Optimizely’s New Stats Engine4, and the Sequential Generalized Likelihood Ratio test, which has been used in clinical trials11. There has also been debate in both academia and industry about the effectiveness of Bayesian A/B testing in solving the peeking problem2, 5. It is indeed a very interesting problem!

Final Thoughts

Accurate interpretation of statistical data is crucial in making informed decisions about product development. When online experiments have to be run efficiently to save time and cost, we inevitably run into dilemmas unique to our context, and peeking is just one of them. In researching and designing solutions to this problem, we examined some more rigorous theoretical work; however, the characteristics and priorities of online experimentation make it difficult to apply directly. Our approach outlined in this post, though simple, addresses the root cause of the peeking problem effectively. Looking forward, we think the balance between statistical rigor and practical constraints is what makes online experimentation intriguing and fun to work on, and we at Etsy are very excited about tackling more interesting problems awaiting us.

This work is a collaboration between Callie McRee and Kelly Shen from the Analytics and Analytics Engineering teams. We would like to thank Gerald van den Berg, Emily Robinson, Evan D’Agostini, Anastasia Erbe, Mossab Alsadig, Lushi Li, Allison McKnight, Alexandra Pappas, David Schott and Robert Xu for helpful discussions and feedback.

References

  1. How Not to Run an A/B Test by Evan Miller
  2. Is Bayesian A/B Testing Immune to Peeking? Not Exactly by David Robinson
  3. Peeking at A/B tests: why it matters, and what to do about it by Johari et al., KDD’17
  4. The New Stats Engine by Pekelis et al., Optimizely
  5. Continuous monitoring of A/B tests without pain: optional stopping in Bayesian testing by Deng, Lu et al., CEUR’17
  6. Trial sans Error: How Pharma-Funded Research Cherry-Picks Positive Results by Ben Goldacre, Scientific American, February 13, 2013
  7. False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant by Simmons, Simonsohn et al. (2011), Psychological Science, 22
  8. Interim Analyses and Sequential Testing in Clinical Trials by Nicole Solomon, BIOS 790, Duke University
  9. A Pocock approach to sequential meta-analysis of clinical trials by Shuster, J. J., & Neu, J. (2013), Research Synthesis Methods, 4(3), 10.1002/jrsm.1088
  10. Simple Sequential A/B Testing by Evan Miller
  11. Sequential Generalized Likelihood Ratio Tests for Vaccine Safety Evaluation by Shih, M.-C., Lai, T. L., Heyse, J. F. and Chen, J. (2010), Statistics in Medicine, 29: 2698-2708

11:00

Saturday Morning Breakfast Cereal - Flour [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If you point out how stupid this joke is to me, it'll lower total human happiness.


Today's News:

02:00

Use Postfix to get email from your Fedora system [Fedora Magazine]

Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent (MTA) isn’t properly configured, you might not be getting the notifications. Postfix is an MTA that’s easy to configure and known for a strong security record. Follow these steps to ensure that email notifications sent from local services are routed to your internet email account through the Postfix MTA.

Install packages

Use dnf to install the required packages (you configured sudo, right?):

$ sudo -i
# dnf install postfix mailx

If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the alternatives command to set your system default MTA:

$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.
  Selection    Command
*+ 1           /usr/sbin/sendmail.sendmail
   2           /usr/sbin/sendmail.postfix
Enter to keep the current selection[+], or type selection number: 2

Create a password_maps file

You will need to create a Postfix lookup table entry containing the email address and password of the account that you want to use for sending email:

# MY_EMAIL_ADDRESS=glb@gmail.com
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c

If you are using a Gmail account, you’ll need to configure an “app password” for Postfix, rather than using your Gmail password. See “Sign in using App Passwords” for instructions on configuring an app password.

Next, you must run the postmap command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:

# postmap /etc/postfix/password_maps

The hashed version will have the same file name but it will be suffixed with .db.

Update the main.cf file

Update Postfix’s main.cf configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines.

relayhost = [smtp.gmail.com]:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps

The example assumes you’re using Gmail for the relayhost setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.
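As a purely illustrative sketch (mail.example.com and the port are placeholders, not a real provider), a non-Gmail relay would only change the relayhost line:

```
# Placeholder host; the square brackets suppress MX lookups
relayhost = [mail.example.com]:587
```

If your provider only offers implicit TLS on port 465, Postfix 3.0 and later can handle that by additionally setting smtp_tls_wrappermode = yes.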

For the most up-to-date details about the above configuration options, see the man page:

$ man postconf.5

Enable, start, and test Postfix

After you have updated the main.cf file, enable and start the Postfix service:

# systemctl enable --now postfix.service

You can then exit the root session using the exit command or Ctrl+D. You should now be able to test your configuration with the mail command:

$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com

Update services

If you have services like logwatch, mdadm, fail2ban, apcupsd or certwatch installed, you can now update their configurations so that their email notifications will go to your internet email address.
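For example, mdadm reads its notification address from /etc/mdadm.conf; a one-line sketch (the address is a placeholder):

```
MAILADDR glb@gmail.com
```

Other services have their own equivalents, such as the MailTo setting in logwatch’s configuration.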

Optionally, you may want to configure all email that is sent to your local system’s root account to go to your internet email address. Add this line to the /etc/aliases file on your system (you’ll need to use sudo to edit this file, or switch to the root account first):

root: glb+root@gmail.com

Now run this command to re-read the aliases:

# newaliases
  • TIP: If you are using Gmail, you can append a plus sign and a short alphanumeric tag to your username (as demonstrated above) to make it easier to identify and filter the email that you will receive from your computer(s).
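A small sketch of that tip: building a per-machine alias line with a plus tag, so each computer’s mail is easy to filter. The username and domain below are placeholders, not real accounts:

```shell
# Build a Gmail-style "+" tagged alias for this machine's root mail.
# The username and domain are placeholders.
user="glb"
domain="gmail.com"
tag="$(hostname -s 2>/dev/null || echo myhost)"   # per-machine tag
alias_line="root: ${user}+${tag}@${domain}"
echo "$alias_line"    # append this line to /etc/aliases, then run newaliases
```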

Troubleshooting

View the mail queue:

$ mailq

Clear all email from the queues:

# postsuper -d ALL

Filter the configuration settings for interesting values:

$ postconf | grep "^relayhost\|^smtp_"

View the postfix/smtp logs:

$ journalctl --no-pager -t postfix/smtp

Reload postfix after making configuration changes:

# systemctl reload postfix

Photo by Sharon McCutcheon on Unsplash.

Thursday, 01 August

09:12

Saturday Morning Breakfast Cereal - Tundra [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
One day, we will clone the Dodo and reintroduce it to an underwater island.


Today's News:

Hey geeks, if you buy the new book via Barnes and Noble, you can get a free digital comic describing how the project started!

Wednesday, 31 July

14:10

Why I’m Helping Cloudflare Grow in Australia & New Zealand (A/NZ) [The Cloudflare Blog]


I’ve recently joined Cloudflare as Head of Australia and New Zealand (A/NZ). This is an important time for the company as we continue to grow our presence locally to address the demand in A/NZ, recruit local talent, and build on the successes we’ve had in our other offices around the globe. In this new role, I’m eager to grow our brand recognition in A/NZ and optimise our reach to customers by building up my team and channel presence.

A little about me

I’m a Melburnian born and bred (most livable city in the world!) with more than 20 years of experience in our market. From guiding strategy and architecture of the region’s largest resources company, BHP, to building and running teams and channels, and helping customers solve the technical challenges of their time, I have been in, or led, businesses in the A/NZ Enterprise market, with a focus on network and security for the last six years.

Why Cloudflare?

I joined Cloudflare because I strongly believe in its mission to help build a better Internet, and believe this mission, paired with its massive global network, will enable the company to continue to deliver incredibly innovative solutions to customers of all segments.

Four years ago, I was lucky to build and lead the VMware Network & Security business, working with some of Cloudflare’s biggest A/NZ customers. I was confronted with the full extent of the security challenges that A/NZ businesses face. I recognized that there must be a better way to help customers secure their local and multi-cloud environments. That's how I found Cloudflare. With Cloudflare's Global Cloud Platform, businesses have an integrated solution that offers the best in security, performance and reliability.

Second, something that’s personally important for me as the son of Italian migrants, and now a dad of two gorgeous daughters, is that Cloudflare is serious about culture and diversity. When I was considering joining Cloudflare, I watched videos from the Internet Summit, an annual event that Cloudflare hosts in its San Francisco office. One thing that really stood out to me was that the speakers came from so many different backgrounds.

I’m extremely passionate about encouraging those from all walks of life to pursue opportunities in business and tech, so seeing the diversity of people giving insightful talks made me realise that this was a company I wanted to work for, and hopefully one my girls will want to work for as well (no pressure).

Cloudflare A/NZ

I strongly believe that Cloudflare’s mission, paired with its massive global network, will enable customers of all sizes in segments in Australia and New Zealand to leverage Cloudflare’s security, performance and reliability solutions.

For example, VicRoads is 85 percent faster now that they are using Argo Smart Routing, Ansarada uses Cloudflare’s WAF to protect against malicious activity, and MyAffiliates harnesses Cloudflare’s global network, which spans more than 180 cities in 80 countries, to ensure an interruption-free service for its customers.

Making security and speed, which are necessary for any strong business, available to anyone with an Internet property is truly a noble goal. That’s another one of the reasons I’m most excited to work at Cloudflare.

Australians and Kiwis alike have always been great innovators and users of technology. However, being so physically isolated (Perth is the most isolated city in the world, and A/NZ are far from pretty much everywhere else) has limited the diversity of choice and competition available to us. That isolation fueled innovation, but at the price of complexity, cost, and convenience. It also makes having local servers absolutely vital for good performance. With Cloudflare’s expansive network, 98 percent of the Internet-connected developed world is located within 100 milliseconds of our network. In fact, Cloudflare already has data centers in Auckland, Brisbane, Melbourne, Perth, and Sydney, ensuring that customers in A/NZ have access to a secure, fast, and reliable Internet.

Our opportunities in Australia, New Zealand and beyond...

I’m truly looking forward to helping Cloudflare grow its reach over the next five years. If you are a business in Australia and New Zealand and have a cyber-security, performance or reliability need, get in touch with us (1300 748 959). We’d love to explore how we can help.

If you’re interested in exploring careers at Cloudflare, we are hiring globally. Our team in Australia is small today, about a dozen, and we are growing quickly. We have open roles for Solutions Engineers and Business Development Representatives. Check out our careers page to learn more, or send me a note.

09:15

Saturday Morning Breakfast Cereal - Critical [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Sometimes you come up with a full idea, and sometimes you start from a pun and work backward.


Today's News:

Thanks, geeks!


09:10

Saturday Morning Breakfast Cereal - Jurassic [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I am prepared to offer my services to the writing of Jurassic Park Part 17.


Today's News:

07:31

An Introduction to Structured Data at Etsy [Code as Craft]

Etsy has an uncontrolled inventory; unlike many marketplaces, we offer an unlimited array of one-of-a-kind items, rather than a defined set of uniform goods. Etsy sellers are free to list any policy-compliant item that falls within the three broad buckets of craft supplies, handmade, and vintage. Our lack of standardization, of course, is what makes Etsy special, but it also makes learning about our inventory challenging. That’s where structured data comes in.

Structured vs. Unstructured Data

Structured data is data that exists in a defined relationship to other data. The relation can be articulated through a tree, graph, hierarchy, or other standardized schema and vocabulary. Conversely, unstructured data does not exist within a standardized framework and has no formal relationship to other data in a given space.

For the purposes of structured data at Etsy, the data are the product listings, and they are structured according to our conception of where in the marketplace they belong. That understanding is expressed through the taxonomy.

Etsy’s taxonomy is a collection of hierarchies comprising 6,000+ categories (ex. Boots), 400+ attributes (ex. Women’s shoe size), 3,500+ values (ex. 7.5), and 90+ scales (ex. US/Canada). These hierarchies form the foundation of 3,500+ filters and countless category-specific shopping experiences on the site. The taxonomy imposes a more controlled view of the uncontrolled inventory — one that engineers can use to help buyers find what they are looking for.

Building the Taxonomy

The Etsy taxonomy is represented in JSON files, with each category’s JSON containing information about its place in the hierarchy and the attributes, values, and scales for items in that category. Together, these determine what questions will be asked of the seller for listings in that category (Figure A, Box 1), and what filters will be shown to buyers for searches in that category (Figure A, Box 2).

Figure A 
A snippet of the JSON representation of the Jewelry > Rings > Bands category
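Since the screenshot does not reproduce here, the shape of such a category file can be sketched hypothetically. The field names below are illustrative assumptions, not Etsy’s actual schema:

```json
{
  "id": 1234,
  "name": "Bands",
  "path": "Jewelry > Rings > Bands",
  "attributes": [
    {
      "name": "Occasion",
      "scale": null,
      "values": ["Anniversary", "Engagement", "Wedding"]
    }
  ],
  "buyer_filters": ["Occasion", "Material"]
}
```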

The taxonomists at Etsy are able to alter the taxonomy hierarchies using an internal tool. This tool supports some unique behaviors of our taxonomy, like inheritance. This means that if a category has a particular filter, then all of its subcategories will inherit that filter as well.

Figure B
Sections of the Jewelry > Rings > Bands category as it appears in our internal taxonomy tool 

Gathering Structured Data: The Seller Perspective

One of the primary ways that we currently collect structured data is through the listing creation process, since that is our best opportunity to learn about each listing from the person who is most familiar with it: the seller!

Sellers create new listings using the Shop Manager. The first step in the listing process is to choose a category for the listing from within the taxonomy. Using auto-complete suggestions, sellers can select the most appropriate category from all of the categories available. 

Figure C 
Category suggestions for “ring”

At this stage in the listing creation process, optional attribute fields appear in the Shop Manager. This is also enabled by the taxonomy JSON, in that the fields correspond with the category selected by the seller (see Figure A, Box 1). This behavior ensures that we are only collecting relevant attribute data for each category and simplifies the process for sellers. Promoting this use of standardized data also reduces the need for overloaded listing titles and descriptions by giving sellers a designated space to tell buyers about the details of their products. Data collected during the listing creation process appears on the listing page, highlighting for the buyer some of the key, standardized details of the listing.

Figure D
Some of the attribute fields that appear for listings in Jewelry > Rings > Bands (see Figure A, Box 1 for the JSON that powers the Occasion attribute)

Making Use of Structured Data: The Buyer Perspective

Much of the buyer experience is a product of the structured data that has been provided by our sellers. For instance, a given Etsy search yields category-specific filters on the left-hand navigation of the search results page. 

Figure E
Some of the filters that appear upon searching for “Rings”

Those filters should look familiar! (see Figure D) They are functions of the taxonomy. The search query gets classified to a taxonomy category through a big data job, and filters affiliated with that category are displayed to the user (see Figure F below). These filters allow the buyer to narrow down their search more easily and make sense of the listings displayed.

Figure F
The code that displays category-specific filters upon checking that the classified category has buyer filters defined in its JSON (see Figure A, Box 2 for a sample filter JSON)

Structuring Unstructured Data

There are countless ways of deriving structured data that go beyond seller input. First, there are ways of converting unstructured data that has already been provided, like listing titles or descriptions, into structured data. Also, we can use machine learning to learn about our listings and reduce our dependence on seller input. We can, for example, learn about the color of a listing through the image provided; we can also infer more nuanced data about a listing, like its seasonality or occasion.

We can continue to measure the relevance of our structured data through metrics like the depth of our inventory categorization within our taxonomy hierarchies and the completeness of our inventory’s attribution.

All of these efforts allow us to continue to build deeper category-specific shopping experiences powered by structured data. By investing in better understanding our inventory, we create deeper connections between our sellers and our buyers.

02:00

Multi-monitor wallpapers with Hydrapaper [Fedora Magazine]

Using multiple monitors, by default, means that your desktop wallpaper is duplicated across all of your screens. However, with all the screen real estate that a multi-monitor setup delivers, having a different wallpaper for each monitor is a nice way to brighten up your workspace even more.

One workaround for getting different wallpapers on multiple monitors is to create a single combined image by hand, using something like GIMP to crop and position your backgrounds manually. There is, however, a neat wallpaper manager called Hydrapaper that makes setting multiple wallpapers a breeze.

Hydrapaper

Hydrapaper is a simple GNOME application that auto-detects your monitors, and allows you to choose different wallpapers for each display. In the background, it achieves this by simply composing a new background image from your choices that fits your displays, and sets that as your new wallpaper. All with a single click.

Hydrapaper lets the user define multiple source directories to choose wallpapers from, and also has an option to select random wallpapers from those directories. It also allows you to mark favourite images, which are gathered in an additional Favourites category. This is especially useful for users who have a lot of wallpapers and change them frequently.

Installing Hydrapaper on Fedora Workstation

Hydrapaper is available to install from the third-party Flathub repository. If you have never installed an application from Flathub before, set it up using the following guide:

Install Flathub apps on Fedora

After correctly setting up Flathub as a software source, you will be able to search for and install Hydrapaper via GNOME Software.
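If you prefer the command line, the same setup can be sketched with flatpak directly. The application ID below is an assumption; check the Flathub listing for the current ID:

```shell
# Add the Flathub remote (if missing) and install Hydrapaper for the current user.
# APP_ID is assumed to be org.gabmus.hydrapaper -- verify on flathub.org.
APP_ID="org.gabmus.hydrapaper"
if command -v flatpak >/dev/null 2>&1; then
    flatpak remote-add --if-not-exists --user flathub \
        https://flathub.org/repo/flathub.flatpakrepo
    flatpak install --user -y flathub "$APP_ID"
else
    echo "flatpak is not installed; run: sudo dnf install flatpak"
fi
```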