Monday, 14 October

13:30

A Code Glitch May Have Caused Errors In More Than 100 Published Studies [Slashdot]

Scientists have uncovered a glitch in a piece of code that could have yielded incorrect results in over 100 published studies that cited the original paper. From a report: The glitch caused results of a common chemistry computation to vary depending on the operating system used, causing discrepancies among Mac, Windows, and Linux systems. The researchers published the revelation and a debugged version of the script, which amounts to roughly 1,000 lines of code, last week in the journal Organic Letters. "This simple glitch in the original script calls into question the conclusions of a significant number of papers on a wide range of topics in a way that cannot be easily resolved from published information because the operating system is rarely mentioned," the new paper reads. "Authors who used these scripts should certainly double-check their results and any relevant conclusions using the modified scripts in the [supplementary information]." Yuheng Luo, a graduate student at the University of Hawai'i at Manoa, discovered the glitch this summer when he was verifying the results of research conducted by chemistry professor Philip Williams on cyanobacteria. The aim of the project was to "try to find compounds that are effective against cancer," Williams said.
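The excerpt doesn't spell out the mechanism, but a classic way a processing script can give OS-dependent results is relying on the order in which a directory listing comes back, which differs between operating systems and filesystems. A minimal, hypothetical Python sketch of that pitfall and the usual fix (illustrative only, not the actual script from the paper):

# Hypothetical illustration -- not the script from the paper.
# glob.glob() returns files in whatever order the OS/filesystem reports them,
# so downstream code that pairs a value with a file by position can differ
# across Mac, Windows, and Linux unless the list is explicitly ordered.
import glob

def read_value(path):
    # Stand-in for the real per-file computation: assume the first line
    # of each output file holds a number.
    with open(path) as fh:
        return float(fh.readline())

def collect_values(pattern):
    files = sorted(glob.glob(pattern))  # sorted() makes the order deterministic
    return [(path, read_value(path)) for path in files]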

Read more of this story at Slashdot.

13:12

Pitney Bowes: Can we be frank? Ransomware has borked our dead-tree post systems [The Register]

Venerable stamp-machine maker stalled by server infection

Pitney Bowes, the US stamping meter maker, has been infected with ransomware, leaving customers unable to top up their equipment with credit or access the corporate web store.…

12:53

India's Reliance Jio Unveils Video Call Assistant To Help Businesses Automate Customer Support [Slashdot]

Before Google brings its human-sounding robot calling service Duplex, which helps users automate their interactions with businesses, to international markets, an Indian giant is deploying its own solution to get a jumpstart on the local market. From a report: Reliance Jio today unveiled an AI-powered Video Call Assistant service that will allow businesses to automate their customer support and other communications. The service, built in collaboration with Radisys, a U.S.-based subsidiary of Reliance Industries, can be accessed via a 4G phone call and does not require installation of any additional app, Jio said. Executives of Reliance Jio demonstrated the technology on Monday at the third installment of the Indian Mobile Congress, similar to but not affiliated with the trade show Mobile World Congress. They said they have already courted a number of customers for this service, including HDFC Bank. In the demo, a user dials a regular phone number and sees a video chat option. Once tapped, the user is greeted by a pre-recorded video message from a human. To demonstrate the AI's capabilities, an executive of Reliance Jio asked the bot what the interest rate on personal loans was. The human-looking bot was able to answer the question without any delay. The company, which became the largest telecom operator in India in three years, is also offering audio and text bot options to brands, executives said. "It may be a large business or small, our bot service is built for all," one of the two executives said.

Read more of this story at Slashdot.

12:23

Debian 11 To Further Deprecate IPTables In Favor Of Nftables Plus Promoting Firewalld [Phoronix]

Debian 10 "Buster" already is making use of IPTables' Netfilter back-end by default in their path to deprecate IPTables while for Debian 11 the deprecation will continue further...

12:10

Uber Lays Off Another 350 Employees Across Eats, Self-driving and Other Departments [Slashdot]

Uber has just laid off around 350 employees across a variety of teams within the organization, marking what the company says is the third and final phase of the layoff process it began earlier this year, Uber CEO Dara Khosrowshahi told employees today in an email. From a report: Those affected include employees from Eats, performance marketing, Advanced Technologies Group, recruiting, as well as various teams within the global rides and platform departments. Some employees have also been asked to relocate. "Days like today are tough for us all, and the ELT and I will do everything we can to make certain that we won't need or have another day like this ahead of us," Khosrowshahi wrote in the email. "We all have to play a part by establishing a new normal in how we work: identifying and eliminating duplicate work, upholding high standards for performance, giving direct feedback and taking action when expectations aren't being met, and eliminating the bureaucracy that tends to creep as companies grow." In total, the layoffs represent about 1% of the company, an Uber spokesperson told TechCrunch. Further reading: Uber Posts $5.2 Billion Loss and Slowest Ever Growth Rate (August 2019).

Read more of this story at Slashdot.

12:08

Google unplugs AMP, hooks it into OpenJS Foundation after critics turn up the volume [The Register]

You want this web tech to be independent? Sure, we'll just put it in an org we bankroll

Google's AMP project will join the incubation program of the OpenJS Foundation, which is part of the Linux Foundation.…

11:30

Inside Mark Zuckerberg's Private Meetings With Conservative Pundits [Slashdot]

Facebook CEO Mark Zuckerberg has been hosting informal talks and small, off-the-record dinners with conservative journalists, commentators and at least one Republican lawmaker in recent months to discuss issues like free speech and partnerships, Politico reported on Monday. From the report: The dinners, which began in July, are part of Zuckerberg's broader effort to cultivate friends on the right amid outrage by President Donald Trump and his allies over alleged "bias" against conservatives at Facebook and other major social media companies. "I'm under no illusions that he's a conservative but I think he does care about some of our concerns," said one person familiar with the gatherings, which multiple sources have confirmed. News of the outreach is likely to further fuel suspicions on the left that Zuckerberg is trying to appease the White House and stay out of Trump's crosshairs. The president threatened to sue Facebook and Google in June and has in the past pressured the Justice Department to take action against his perceived foes. "The discussion in Silicon Valley is that Zuckerberg is very concerned about the Justice Department, under Bill Barr, bringing an enforcement action to break up the company," said one cybersecurity researcher and former government official based in Silicon Valley. "So the fear is that Zuckerberg is trying to appease the Trump administration by not cracking down on right-wing propaganda."

Read more of this story at Slashdot.

11:19

Sony Pushes More AMD Jaguar Optimizations To Upstream LLVM 10 Compiler [Phoronix]

Sony engineers working on the PlayStation compiler toolchain continue upstreaming various improvements to the LLVM source tree for helping the AMD APUs powering their latest game console...

10:57

Thoma Bravo To Buy Sophos For $3.9 Billion [Slashdot]

Private equity firm Thoma Bravo plans to buy UK-based cyber-security giant Sophos for $7.40 per share, for a total value of $3.9 billion, both companies announced today. From a report: The sale price represents a 37% premium on the Sophos market trading price, as recorded on Friday at the end of trading. The Sophos board of directors said they plan to "unanimously recommend" the acquisition offer to their shareholders. Before today's announcement, Thoma Bravo acquired a minority stake in McAfee last year and was rumored to be interested in buying the whole company. It is unclear how today's Sophos acquisition will impact plans to buy McAfee, but the two companies -- Sophos and McAfee -- are classic rivals on the cyber-security market and share a similar product portfolio, so the door seems to have closed on the McAfee deal.

Read more of this story at Slashdot.

10:29

How do we stop filling the oceans with Lego? By being a BaaS-tard, toy maker suggests [The Register]

Firm admits it has considered a bricks-as-a-service biz model

Beloved brick maker Lego is considering a rental service as part of a drive to improve sustainability in a world where hatred of plastic is threatening its attractiveness as a toy.…

10:22

Google USB-C Titan Security Keys Begin Shipping Tomorrow [Phoronix]

Google announced their new USB-C Titan Security Key will begin shipping tomorrow, offering two-factor authentication support not only for Android devices but for all the major operating systems as well...

10:21

Apple Responds To Reports That It is Sharing Data With Tencent [Slashdot]

Over the weekend, reports emerged claiming that Apple was sending users' browsing details to Tencent to check them against the Chinese company's safe browsing feature. In a statement on Monday, an Apple spokesperson offered a clarification: Apple protects user privacy and safeguards your data with Safari Fraudulent Website Warning, a security feature that flags websites known to be malicious in nature. When the feature is enabled, Safari checks the website URL against lists of known websites and displays a warning if the URL the user is visiting is suspected of fraudulent conduct like phishing. To accomplish this task, Safari receives a list of websites known to be malicious from Google, and for devices with their region code set to mainland China, it receives a list from Tencent. The actual URL of a website you visit is never shared with a safe browsing provider and the feature can be turned off.

Read more of this story at Slashdot.

09:55

John Lennon says hello. Hello, hello... as Cancom buys Novosco for £70m [The Register]

You know, co-founder of the Belfast-based reseller

Munich-based Cancom Group is paying £70m to acquire public-sector reseller Novosco.…

09:33

Booking Holdings Is Latest to Pull Out of Libra Association [Slashdot]

Booking Holdings, an online travel company that operates Kayak.com and Priceline.com among other websites, said it's withdrawing from participation in the Libra Association, an ambitious and controversial Facebook-led project to create a new cryptocurrency. From a report: The Norwalk, Connecticut-based company joins PayPal, Stripe, Visa, Mastercard, MercadoLibre and EBay in leaving the project in the past two weeks. The plan has come under intense scrutiny from lawmakers and regulators who feared that Libra could be used for criminal purposes and undercut countries' monetary policy, among myriad other concerns.

Read more of this story at Slashdot.

08:46

Planting Tiny Spy Chips in Hardware Can Cost as Little as $200 [Slashdot]

An anonymous reader shares a report: More than a year has passed since Bloomberg Businessweek grabbed the lapels of the cybersecurity world with a bombshell claim: that Supermicro motherboards in servers used by major tech firms, including Apple and Amazon, had been stealthily implanted with a chip the size of a rice grain that allowed Chinese hackers to spy deep into those networks. Apple, Amazon, and Supermicro all vehemently denied the report. The NSA dismissed it as a false alarm. The Defcon hacker conference awarded it two Pwnie Awards, for "most overhyped bug" and "most epic fail." And no follow-up reporting has yet affirmed its central premise. But even as the facts of that story remain unconfirmed, the security community has warned that the possibility of the supply chain attacks it describes is all too real. The NSA, after all, has been doing something like it for years, according to the leaks of whistle-blower Edward Snowden. Now researchers have gone further, showing just how easily and cheaply a tiny, tough-to-detect spy chip could be planted in a company's hardware supply chain. And one of them has demonstrated that it doesn't even require a state-sponsored spy agency to pull it off -- just a motivated hardware hacker with the right access and as little as $200 worth of equipment.

Read more of this story at Slashdot.

08:36

ASRock Rack EPYCD8 Series Make For Great Value AMD EPYC Motherboards With Rome Support [Phoronix]

For those interested in AMD's EPYC 7002 "Rome" processors for their own server builds, more 7002 series supported motherboards have been hitting Internet stores in recent weeks. If you are looking for one of the lower-cost motherboards, ASRock Rack's EPYCD8 motherboards have been refined with 7001/7002 series processor support.

08:31

Her Majesty opens UK Parliament with fantastic tales of gigabit-capable broadband for everyone [The Register]

But without a majority, it's more likely to form basis of Tory manifesto than law

The UK government has promised to roll out new legislation to achieve nationwide "gigabit-capable broadband" among 26 bills set out in Parliament's State Opening today.…

08:05

San Francisco Wants to Require Companies To Get Permits Before Rolling Out 'Emerging Technology' [Slashdot]

Companies in San Francisco might soon be required to get a permission slip from the city before rolling out their new innovations in public spaces. From a report: On Tuesday, Norman Yee, president of the city's Board of Supervisors, introduced a bill that would create the Office of Emerging Technology (OET). Entrepreneurs looking to deploy any emerging technology "upon, above, or below" city properties or public rights-of-way would need to first obtain a pilot permit from the OET's director. "As a city, we must ensure that such technologies ultimately result in a net common good and that we evaluate the costs and benefits so that our residents, workers and visitors are not unwittingly made guinea pigs of new tech," said Yee in a statement to the San Francisco Chronicle. Over the years San Francisco's tech companies have deployed all kinds of inventions in public spaces, including package delivery robots and dockless electric scooters. But because these innovations were, well, innovative, no specific rules initially existed to govern their use.

Read more of this story at Slashdot.

07:41

Visual Studio Code gets more touch-feely, new Windows Server builds arrive for brave admins [The Register]

Apple flogs Microsoft hardware and Puppet's CTO has a... notepad.exe tattoo?

Roundup  In a week that left the Windows Insider team facing a leadership vacuum after its Ninjacat-in-chief jumped ship, Microsoft's army of gnomes continued to toil ahead of the company's impending Ignite shindig.…

06:59

Tearoff of Nottingham: University to lose chunk of IT dept to outsourcing [The Register]

Cos that's always gone really well...

Exclusive  The University of Nottingham has announced it will outsource some of its IT operations in a long-awaited shakeup of the department.…

06:55

Saturday Morning Breakfast Cereal - Mathematicians [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Eternal gratitude to my patreon subscribers who realized that the original version of this had the wrong value. Nerds.


Today's News:

06:31

Swiss wheeze: Microsoft reseller titan SoftwareONE plots IPO on Zurich exchange [The Register]

If that floats your boat

SoftwareONE, one of the world's largest Microsoft resellers, has started pre-booking its shares ahead of an initial public offering on the Swiss stock exchange later this month.…

05:49

RIP: First space-walk badass Alexei Leonov, who made it to 85 despite best efforts of Soviet machine [The Register]

Looking back on Voskhod, Salyut, Soyuz, Apollo and having the right stuff

Obit  Alexei Leonov, the first man to float out of a capsule and into space, has died at the age of 85.…

05:34

G7 Taskforce Warns Global Cryptocurrencies Like Libra Pose Risks, May Not Be Approved [Slashdot]

"Stablecoin" cryptocurrencies like Libra pose a risk to the global financial system, warns a new report by the G7 group of nations. An anonymous reader quotes the BBC: The G7 taskforce that produced the report includes senior officials from central banks, the International Monetary Fund and the Financial Stability Board, which coordinates rules for the G20 economies. It says backers of digital currencies like Libra must be legally sound, protect consumers and ensure coins are not used to launder money or fund terrorism.... The draft report says: "The G7 believe that no stablecoin project should begin operation until the legal, regulatory and oversight challenges and risks are adequately addressed...." The draft report outlines nine major risks posed by such digital currencies. It warns that even if Libra's backers address concerns, the project may not get approval from regulators... "Addressing such risks is not necessarily a guarantee of regulatory approval for a stablecoin arrangement," the report says.

Read more of this story at Slashdot.

05:29

LLVM "Stack Clash" Compiler Protection Is Under Review [Phoronix]

Two years after the "Stack Clash" vulnerability came to light, the LLVM compiler is working on adding protection against it similar to the GCC compiler mitigation...

05:00

SHADERed 1.2.3 Released With Support For 3D Textures & Audio Shaders [Phoronix]

SHADERed is the open-source, cross-platform project for creating and testing HLSL/GLSL shaders. While a version number of 1.2.3 may not seem like a big update, some notable additions can be found within this new SHADERed release...

04:58

Remember, remember, it's now called November: Windows 10 19H2 update has a name [The Register]

And a release date – sort of

Microsoft has given the next version of Windows 10 a name. 19H2 will now be known as the November 2019 Update and is due to land any day now.…

04:49

POCL 1.4 Released For Advancing OpenCL On CPUs - Now Supports LLVM 9.0 [Phoronix]

Version 1.4 has been released of POCL, the "Portable Computing Language" implementation that allows for a portable OpenCL implementation to be executed on CPUs as well as optionally targeting other accelerators via HSA or even CUDA devices...

04:37

Vulkan 1.1.125 Released With SPIR-V 1.4 Support [Phoronix]

Succeeding Vulkan 1.1.124 one week later is now Vulkan 1.1.125 with a lone new extension...

04:23

'Technical error' threatens Vodafone customers with four-figure roaming fees [The Register]

Bills as high as £9k, but don't worry – they're working on it

Vodafone has apologised for a "technical error" that left customers abroad facing thousands of pounds in roaming fees over the weekend.…

03:53

Private equity to gobble up Brit virus blocker Sophos for £3bn [The Register]

Will join Barracuda Networks, Veracode Software in Thoma Bravo's tum

Brit security software slinger Sophos has accepted an all-cash offer from US suitor private equity group Thoma Bravo of just over £3bn.…

03:19

Microsoft Teams: The good, the bad, and the ugly [The Register]

Why Teams is a key product despite its frustrations – and yes, a Linux client is on the way

Analysis  Microsoft continues to plug Teams as the "fastest growing application" in the company's history, though it is not sold separately, only as a feature of Office 365 (there is also a free version). At the same time, there are major feature gaps that are only now being plugged, and it is not easy to manage. What is the attraction?…

02:15

SUSE, what? Adoption's still growing, shrugs OpenStack Foundation [The Register]

Attention has shifted away from VMs, however, COO tells El Reg

OpenStack chief operating officer Mark Collier told The Reg that while SUSE's decision to abandon its OpenStack Cloud product is "obviously disappointing", adoption is "strong and growing".…

02:00

Use sshuttle to build a poor man’s VPN [Fedora Magazine]

Nowadays, business networks often use a VPN (virtual private network) for secure communications with workers. However, the protocols used can sometimes make performance slow. If you can reach a host on the remote network with SSH, you could set up port forwarding. But this can be painful, especially if you need to work with many hosts on that network. Enter sshuttle — which lets you set up a quick and dirty VPN with just SSH access. Read on for more information on how to use it.

The sshuttle application was designed for exactly the kind of scenario described above. The only requirement on the remote side is that the host must have Python available. This is because sshuttle constructs and runs some Python source code to help transmit data.

Installing sshuttle

The sshuttle application is packaged in the official repositories, so it’s easy to install. Open a terminal and use the following command with sudo:

$ sudo dnf install sshuttle

Once installed, you may find the manual page interesting:

$ man sshuttle

Setting up the VPN

The simplest case is just to forward all traffic to the remote network. This isn’t necessarily a crazy idea, especially if you’re not on a trusted local network like your own home. Use the -r switch with the SSH username and the remote host name:

$ sshuttle -r username@remotehost 0.0.0.0/0

However, you may want to restrict the VPN to specific subnets rather than all network traffic. (A complete discussion of subnets is outside the scope of this article, but you can read more here on Wikipedia.) Let’s say your office internally uses the reserved Class A subnet 10.0.0.0 and the reserved Class B subnet 172.16.0.0. The command above becomes:

$ sshuttle -r username@remotehost 10.0.0.0/8 172.16.0.0/16

This works great for working with hosts on the remote network by IP address. But what if your office is a large network with lots of hosts? Names are probably much more convenient — maybe even required. Never fear, sshuttle can also forward DNS queries to the office with the --dns switch:

$ sshuttle --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16

To run sshuttle as a daemon, add the -D switch. This will also send log information to the systemd journal via its syslog compatibility.
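For example, to forward both of the office subnets above plus DNS queries while running in the background (the user and host names are placeholders):

$ sshuttle --dns -D -r username@remotehost 10.0.0.0/8 172.16.0.0/16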

Depending on the capabilities of your system and the remote system, you can use sshuttle for an IPv6 based VPN. You can also set up configuration files and integrate it with your system startup if desired. If you want to read even more about sshuttle and how it works, check out the official documentation. For a look at the code, head over to the GitHub page.


Photo by Kurt Cotoaga on Unsplash.

01:34

NASA Consultant 'Convinced We Found Evidence of Life on Mars in the 1970s' [Slashdot]

"A consultant for NASA slammed the agency for deliberately ignoring the results of the experiment he handled that showed signs of alien life on Mars," reports the International Business Times. "According to the consultant, NASA refuses to conduct new life-detection tests on the Red Planet." Engineer Gilbert Levin served as a principal investigator on NASA's Viking missions, which sent two identical landers to Mars. For his role, Levin handled the missions' biological experiments known as Labeled Release (LR). These experiments focused on identifying living microorganisms on Mars. The experiments were sent to the Red Planet through the Viking 1 and Viking 2 missions in 1975.... "As the experiment progressed, a total of four positive results, supported by five varied controls, streamed down from the twin Viking spacecraft landed some 4,000 miles apart," Levin wrote in Scientific American. "The data curves signaled the detection of microbial respiration on the Red Planet," he continued. "The curves from Mars were similar to those produced by LR tests of soils on Earth. It seemed we had answered that ultimate question." Despite the results of the LR experiment, the findings were discarded by NASA due to the agency's previous experiment on Mars. More from Levin's article in Scientific American: Life on Mars seemed a long shot. On the other hand, it would take a near miracle for Mars to be sterile. NASA scientist Chris McKay once said that Mars and Earth have been "swapping spit" for billions of years, meaning that, when either planet is hit by comets or large meteorites, some ejecta shoot into space. A tiny fraction of this material eventually lands on the other planet, perhaps infecting it with microbiological hitch-hikers.

Read more of this story at Slashdot.

01:16

Intel Firmware Binaries Land For AX200/AX201 Bluetooth Linux Support [Phoronix]

With devices beginning to hit store shelves using the new Intel WiFi 6 AX200 series chipsets, the firmware binaries have landed in linux-firmware.git for rounding out support for these latest WiFi/Bluetooth adapters...

01:10

Lies, damn lies, and KPIs: Let's not fix the formula until we have someone else to blame [The Register]

When 95 + (5 * RAND()) is all your spreadsheet needs

Who, Me?  Monday has arrived once again and with it the sweet, sweet music of a reader's darkest IT misdeeds in The Register's weekly Who, Me? feature.…

00:30

State of play with NVMe: We asked, you spoke, we listened – here's what you had to say [The Register]

Storage is no longer 'snorage'

Survey results  Storage is no longer snorage. And long gone are the days when enterprise storage could be taken for granted, or at least forgotten about until either users noticed access wasn’t speedy enough or the IT team realized space was running out.…

00:04

Robocop needs reboot, $200m for AI research, UK govt knowingly deployed racist passport system – plus more [The Register]

Read the latest in the amusing world of AI

Roundup  It's another Reg summary of recent AI news.…

Sunday, 13 October

23:03

Imperva cloud firewall pwned, D-Link bug uncovered – plus more [The Register]

Including: Visual Studio Code debug hole found

Roundup  It's time for another security news catch-up.…

22:34

Apple's Safari Browser Is Sending Some Users' IP Addresses To China's Tencent [Slashdot]

"Apple, which often positions itself as a champion of privacy and human rights, is sending some IP addresses from users of its Safari browser on iOS to Chinese conglomerate Tencent -- a company with close ties to the Chinese Communist Party," reports the Reclaim the Net blog: Apple admits that it sends some user IP addresses to Tencent in the "About Safari & Privacy" section of its Safari settings.... The "Fraudulent Website Warning" setting is toggled on by default which means that unless iPhone or iPad users dive two levels deep into their settings and toggle it off, their IP addresses may be logged by Tencent or Google when they use the Safari browser. However, doing this makes browsing sessions less secure and leaves users vulnerable to accessing fraudulent websites... Even if people install a third-party browser on their iOS device, viewing web pages inside apps still opens them in an integrated form of Safari called Safari View Controller instead of the third-party browser. Tapping links inside apps also opens them in Safari rather than a third-party browser. These behaviors that force people back into Safari make it difficult for people to avoid the Safari browser completely when using an iPhone or iPad. Engadget adds that it's "not clear" whether or not Tencent is actually collecting IP addresses from users outside of China. ("You'll see mention of the collection in the U.S. disclaimer, but that doesn't mean it's scooping up info from American web surfers.") But Reclaim the Net points out that the possibility is troubling, in part because Safari is the #1 most popular mobile internet browser in America, with a market share of over 50%.

Read more of this story at Slashdot.

20:59

Was Flash Responsible For 'The Internet's Most Creative Era'? [Slashdot]

A new article this week on Motherboard argues that Flash "is responsible for the internet's most creative era," citing a new 640-page book by Rob Ford on the evolution of web design. [O]ne could argue that the web has actually gotten less creative over time, not more. This interpretation of events is a key underpinning of Web Design: The Evolution of the Digital World 1990-Today (Taschen, $50), a new visual-heavy book from author Rob Ford and editor Julius Wiedemann that does something that hasn't been done on the broader internet in quite a long time: It praises the use of Flash as a creative tool, rather than a bloated malware vessel, and laments the ways that visual convention, technical shifts, and walled gardens have started to rein in much of this unvarnished creativity. This is a realm where small agencies supporting big brands, creative experimenters with nothing to lose, and teenage hobbyists could stand out simply by being willing to try something risky. It was a canvas with a built-in distribution model. What wasn't to like, besides a whole host of malware? The book's author tells Motherboard that "Without the rebels we'd still be looking at static websites with gray text and blue hyperlinks." But instead we got wild experiments like Burger King's "Subservient Chicken" site or the interactive "Wilderness Downtown" site coded by Google. There were also entire cartoon series like Radiskull and Devil Doll or Zombie College -- not to mention games like "A Murder of Scarecrows" or the laughably unpredictable animutations of 14-year-old Neil Cicierega. But Ford tells Motherboard that today, many of the wild ideas have moved from the web to augmented reality and other "physical mediums... The rise in interactive installations, AR, and experiential in general is where the excitement of the early days is finally happening again." Motherboard calls the book "a fitting coda for a kind of digital creativity that -- like Geocities and MySpace pages, multimedia CD-ROMs, and Prodigy graphical interfaces before it -- has faded in prominence."

Read more of this story at Slashdot.

19:36

Wired Remembers the Glory Days of Flash [Slashdot]

Wired recently remembered Flash as "the annoying plugin" that transformed the web "into a cacophony of noise, colour, and controversy, presaging the modern web." They write that its early popularity in the mid-1990s came in part because "Microsoft needed software capable of showing video on their website, MSN.com, then the default homepage of every Internet Explorer user." But Flash allowed anyone to become an animator. (One Disney artist tells them that Flash could do in three days what would take a professional animator 7 months -- and cost $10,000.) Their article opens in 2008, a golden age when Flash was installed on 98% of desktops -- then looks back on its impact: The online world Flash entered was largely static. Blinking GIFs delivered the majority of online movement. Constructed in early HTML and CSS, websites lifted clumsily from the metaphors of magazine design: boxy and grid-like, they sported borders and sidebars and little clickable numbers to flick through their pages (the horror). Flash changed all that. It transformed the look of the web... Some of these websites were, to put it succinctly, absolute trash. Flash was applied enthusiastically and inappropriately. The gratuitous animation of restaurant websites was particularly grievous -- kitsch abominations, these could feature thumping bass music and teleporting ingredients. Ishkur's 'guide to electronic music' is a notable example from the era you can still view -- a chaos of pop arty lines and bubbles and audio samples, it looks like the mind map of a naughty child... In contrast to the web's modern, business-like aesthetic, there is something bizarre, almost sentimental, about billion-dollar multinationals producing websites in line with Flash's worst excess: long loading times, gaudy cartoonish graphics, intrusive sound and incomprehensible purpose... "Back in 2007, you could be making Flash games and actually be making a living," remembers Newgrounds founder Tom Fulp, when asked about Flash's golden age. "That was a really fun time, because that's kind of what everyone's dream is: to make the games you want and be able to make a living off it." Wired summarizes Steve Jobs' "brutally candid" diatribe against Flash in 2010. "Flash drained batteries. It ran slow. It was a security nightmare. He asserted that an era had come to an end... '[T]he mobile era is about low power devices, touch interfaces and open web standards -- all areas where Flash falls short.'" Wired also argues that "It was economically viable for him to rubbish Flash -- he wanted to encourage people to create native games for iOS." But they also write that today, "The post-Flash internet looks different. The software's downfall precipitated the rise of a new aesthetic...one moulded by the specifications of the smartphone and the growth of social media," favoring hits of information rather than striving for more immersive, movie-emulating thrills. And they add that though Newgrounds long-ago moved away from Flash, the site's founder is now working on a Flash emulator to keep all that early classic content playable in a browser.

Read more of this story at Slashdot.

18:28

Linux 5.4-rc3 Released Ahead Of Official Kernel Debut In November [Phoronix]

Linus Torvalds has just issued the third weekly release candidate of the forthcoming Linux 5.4 kernel that should debut as stable before the end of November...

17:41

Can A New TED-Ed Video Series Teach Students To 'Think Like A Coder'? [Slashdot]

An anonymous reader writes: TED Conferences has its own educational YouTube channel (now with 10 million subscribers and over 1.5 billion views). Two weeks ago it launched a 10-episode animated series about computer programming, and its first episode -- The Prison Break -- has already been viewed nearly a quarter of a million times. In the 7-minute video, a programmer wakes up in a prison cell -- with total amnesia -- and discovers a "mysterious stranger" squeezing through the jail cell's bars. It's a floating anthropomorphic drone, saying it needs the programmer's help to rescue a dystopian future world "in turmoil. Robots have taken over." The video introduces the computer programming concept of a loop -- since escaping the jail cell involves testing a key in every possible position. And the video's page on the TED-Ed web site offers links to related resources from Code.org and Free Code Camp, as well as from Advent of Code, "which is run by Eric Wastl, who consulted extensively on Think Like a Coder and inspired quite a few of the puzzles." The episode ends with the programmer dangling from the flying drone, off on an attempt to recover three artifacts -- nodes of memory, power, and creation -- that are currently being used for "nefarious purposes."
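As a rough sketch of the loop idea the episode teaches, testing a key in every possible position until the lock opens might look like this in Python (the try_key function is a hypothetical stand-in for the puzzle's lock):

# Illustrative only: loop over every candidate position until one works.
def open_lock(positions, try_key):
    for position in positions:
        if try_key(position):
            return position  # the position that opened the lock
    return None              # no position worked

# Example usage with a stand-in lock that opens at position 42.
print(open_lock(range(100), lambda p: p == 42))  # prints 42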

Read more of this story at Slashdot.

16:46

Researchers Prove Humans Are Still Better Than AI at 'Angry Birds' [Slashdot]

An anonymous reader quotes the I-Programmer site: Humans! Rest easy, we still beat the evil AI at the all-important Angry Birds game. Recent research by Ekaterina Nikonova and Jakub Gemrot of Charles University (Czech Republic) indicates why this is so.... "Firstly, this game has a large number of possibilities of actions and nearly infinite amount of possible levels, which makes it difficult to use simple state space search algorithms for this task. Secondly, the game requires a planning of sequences of actions, which are related to each other... For example, a poorly chosen first action can make a level unsolvable by blocking a pig with a pile of objects. Therefore, to successfully solve the task, a game agent should be able to predict or simulate the outcome of it is own actions a few steps ahead." The researchers also report that the game requires AI to distinguish "between multiple birds, their abilities and optimum tapping times..." "Despite the fact we have come close to a human-level performance on selected 21 levels, we still lost to 3 out of 4 humans in obtaining a maximum possible total score."

Read more of this story at Slashdot.

15:56

NVIDIA's Job Listings Reveal 'Game Remastering' Studio, New Interest In RISC-V [Slashdot]

An anonymous reader quotes Forbes: Nvidia has a lot riding on the success of its GeForce RTX cards. The Santa Clara, California company is beating the real-time ray tracing drum loudly, adamant on being known as a champion of the technology before AMD steals some of its thunder next year with the PlayStation 5 and its own inevitable release of ray-tracing enabled PC graphics cards. Nvidia has shown that, with ray tracing, it can breathe new life into a decades-old PC shooter like id Software's Quake 2, so why not dedicate an entire game studio to remastering timeless PC classics? A new job listing spotted by DSOGaming confirms that's exactly what Nvidia is cooking up. The ad says NVIDIA's new game remastering program is "cherry-picking some of the greatest titles from the past decades and bringing them into the ray tracing age, giving them state-of-the-art visuals while keeping the gameplay that made them great." (And it adds that the initiative is "starting with a title that you know and love but we can't talk about here!") Meanwhile, a China-based industry watcher on Medium reports that "six RISC-V positions have been advertised by NVIDIA, based in Shanghai and pertaining to architecture, design, and verification."

Read more of this story at Slashdot.

14:55

Millions Watch As Entire Fortnite Ecosystem Becomes a Black Hole [Slashdot]

"Fortnite just blew up its entire map and all that's left is a black hole," reports TechCrunch. Some are speculating that this is simply a teaser for a new Fortnite map, but it's unclear when that new map will arrive... Fortnite's website is currently just a Twitch stream featuring a black hole. The Washington Post reports: Anyone looking for clues on Fortnite's multiple social media accounts were left staring at the same image. The same black hole greets all visitors to Fortnite's Instagram. And intrepid players discovered that inputting the infamous "Konami code" launches a Galaga-style shooting game starring the mascot of Greasy Grove restaurant Durrr Burger.... As the event happened, many Twitch users reported having trouble using the popular streaming service, with more than 4 million people watching the event. Millions more tuned in on YouTube and Twitter, as well.... Rumors have swirled that the famous Fortnite map was going to be completely replaced, and given that everything's now gone, it sounds plausible... Fortnite's Season 10 has been expected to end soon, and since last year, spectacular one-time live events within the game have been used to build hype, signal changes to the one map the game has used for two years, and usher in a new season and battle pass. This time, players who logged in at 2 p.m. Eastern time witnessed a rocket launch from the Dusty Divot area of the island, which turned into multiple rockets, all zipping around in a manner similar to the rocket that players saw in the first season-ending live event in Season 4. The rockets then converged onto an area where a meteor was landing, and the collision caused players to fly up into the air to witness a black hole suck the entirety of the game inside. And since then, players have been left with nothing but the black hole.

Read more of this story at Slashdot.

13:51

Study: Many Popular Medical Apps Send User Info To 3rd Or 4th Parties [Slashdot]

dryriver writes: A study in the British Medical Journal that looked at 24 of the hundreds of medical apps available on Google Play found that 79% pass all sorts of user info -- including sensitive medical info like what your reported symptoms are and what medications you are taking in some cases -- on to third and fourth parties. A German-made and apparently very popular medical app named Ada was found to share user data with trackers like Facebook, Adjust and Amplitude for example. [Click here for the article in German.] The New York Times also warned recently about apps that want to retrieve/store your medical records. From the conclusion of the study: "19/24 (79%) of sampled apps shared user data. 55 unique entities, owned by 46 parent companies, received or processed app user data, including developers and parent companies (first parties) and service providers (third parties). 18 (33%) provided infrastructure related services such as cloud services. 37 (67%) provided services related to the collection and analysis of user data, including analytics or advertising, suggesting heightened privacy risks. Network analysis revealed that first and third parties received a median of 3 (interquartile range 1-6, range 1-24) unique transmissions of user data. Third parties advertised the ability to share user data with 216 "fourth parties"; within this network (n=237), entities had access to a median of 3 (interquartile range 1-11, range 1-140) unique transmissions of user data. Several companies occupied central positions within the network with the ability to aggregate and re-identify user data."

Read more of this story at Slashdot.

12:36

Invisible Hardware Hacks Allowing Full Remote Access Cost Pennies [Slashdot]

Long-time Slashdot reader Artem S. Tashkinov quotes Wired: More than a year has passed since Bloomberg Businessweek grabbed the lapels of the cybersecurity world with a bombshell claim: that Supermicro motherboards in servers used by major tech firms, including Apple and Amazon, had been stealthily implanted with a chip the size of a rice grain that allowed Chinese hackers to spy deep into those networks. Apple, Amazon, and Supermicro all vehemently denied the report. The NSA dismissed it as a false alarm. The Defcon hacker conference awarded it two Pwnie Awards, for "most overhyped bug" and "most epic fail." And no follow-up reporting has yet affirmed its central premise. But even as the facts of that story remain unconfirmed, the security community has warned that the possibility of the supply chain attacks it describes is all too real. The NSA, after all, has been doing something like it for years, according to the leaks of whistle-blower Edward Snowden. Now researchers have gone further, showing just how easily and cheaply a tiny, tough-to-detect spy chip could be planted in a company's hardware supply chain. And one of them has demonstrated that it doesn't even require a state-sponsored spy agency to pull it off -- just a motivated hardware hacker with the right access and as little as $200 worth of equipment.

Read more of this story at Slashdot.

11:39

The UK's National Health System Just Opened A Treatment Center for Videogame Addiction [Slashdot]

An anonymous reader quotes Fortune: The battle against gaming addiction entered a new era this week when the U.K. public health system, the National Health Service (NHS), announced the opening of its first center specializing in 'Internet and Gaming Disorders....' Starting in November, the London-based center's psychiatrists and clinical psychologists will work with patients between ages 13 and 25 whose lives have been affected by "severe or complex behavioral issues associated with gaming, gambling and social media," the NHS said in a release... [T]he U.K. center is meant to fill a gap in mental health treatment that was previously occupied by private programs and more generalized NHS mental health services. "We are inundated. We have got sixty referrals already," says Dr. Henrietta Bowden-Jones of the addictions faculty at the Royal College of Psychiatrists, who serves as director of the National Centre for Internet and Gaming Addictions where the new clinic will be located.... Other European clinics have seen similarly desperate growth. The Yes We Can clinic on the outskirts of Eindhoven, Netherlands, for instance, treated 250 children for gaming addiction in 2018 and has so far treated 450 in 2019 -- including 50 from the U.K... Dr. Bowden-Jones says that she expects that a relatively small percentage of gamers will suffer the medically recognized disorder -- no more than 2% -- but that the issue is important to address because about 75% of young people in the U.K. engage in gaming.

Read more of this story at Slashdot.

11:05

Ubuntu 19.10 Provides Good Out-Of-The-Box Support For The Dell XPS 7390 Icelake Laptop [Phoronix]

For those not following on Twitter, recently I picked up one of the new Dell XPS 7390 laptops to finally be able to deliver Linux benchmarks from Intel Ice Lake! Yes, it's real and running under Linux! For those eyeing the Dell XPS 7390 with this being the first prominent laptop with Ice Lake, here is a brief look at the initial experience with using Ubuntu 19.10.

10:34

Google Takes AMP to the OpenJS Foundation [Slashdot]

An anonymous reader quotes TechCrunch: AMP, Google's somewhat controversial project for speeding up the mobile web, has always been open-source, but it also always felt like a Google project first. Thursday, however, Google announced that the AMP framework will join the OpenJS Foundation, the Linux Foundation-based group that launched last year after the merger of the Node.js and JS foundations. The OpenJS Foundation is currently the home of projects like jQuery, Node.js and webpack, and AMP will join the Foundation's incubation program... Google also notes that the OpenJS Foundation's goal of promoting JavaScript and related technologies is a good fit for AMP's mission of providing "a user-first format for web content." The company also notes that the Foundation allows projects to maintain their identities and technical focus and stresses that AMP's governance model was already influenced by the JS Foundation and Node.js Foundation. Google is currently a top-level platinum member of the OpenJS Foundation and will continue to support the project and employ a number of engineers that will work on AMP full-time.

Read more of this story at Slashdot.

09:34

IRS Programmer Stole Identities, Funded A Two-Year Shopping Spree [Slashdot]

A computer programmer at America's tax-collecting agency "stole multiple people's identities, and used them to open illicit credit cards to fund vacations and shop for shoes and other goods," writes Quartz, citing a complaint unsealed last week in federal court. An anonymous reader quotes their report: The complaint accuses the 35-year-old federal worker of racking up almost $70,000 in charges over the course of two years, illegally using "the true names, addresses, dates of birth, and Social Security numbers" of at least three people. The US Treasury Department's Inspector General for Tax Administration, which oversees internal wrongdoing at the Internal Revenue Service (IRS), is investigating the crime, although the complaint doesn't specify how the employee obtained the information. The arrest, however, comes just months after the Government Accountability Office -- the federal government's auditor, essentially -- issued a report raising concerns about the security of taxpayer information held at the IRS. The report said that unaddressed shortcomings left taxpayer data "unnecessarily vulnerable to inappropriate and undetected use, modification, or disclosure," which could allow employees or outsiders to illegally access millions of people's personal information. An IRS call center employee in Atlanta pleaded guilty last year to illegally using taxpayer data to file fraudulent tax returns, ultimately collecting almost $6,000. In 2016, another IRS worker in Atlanta admitted to improperly accessing the personal information of two taxpayers, amassing close to half a million dollars from illicit tax refunds.... The IRS employee's alleged scheme took place between January 2016 and February 2018, according to court filings. Investigators say he used a fraudulently obtained American Express card to fly to Sacramento and Miami Beach. He also used the card for some 37 Uber rides, nine payments on his father's Amazon account totaling $1,200, various purchases at Lowe's, the Designer Shoe Warehouse, BJ's Wholesale Club, and a flooring outlet, as well as a $7,400 payment to a business he owned. The complaint says the employee, who works for the tax agency as a software developer, obtained a second fraudulent credit card, which he used to fly to Montego Bay, Jamaica. A third fraudulent card was used to travel to Iceland. In a particularly brazen move, investigators say the suspect linked this card to a phony PayPal account he opened using his official IRS email address. Two of the credit cards were delivered to his home address, while a third was sent to his parents' address, according to the article. "The phone numbers listed on the accounts also belonged to the suspect, and he accessed emails associated with the accounts from his home IP address."

Read more of this story at Slashdot.

08:40

Saturday Morning Breakfast Cereal - Orbit [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
After extensive research it turns out that three things is too much things.


Today's News:

Hey, if you're interested in the Open Borders book, we're auctioning a signed copy!

08:34

New Chrome Feature Will Use AI To Describe Unlabelled Images To The Vision-Impaired [Slashdot]

An anonymous reader quotes TechSpot: Google is looking to improve the web-browsing experience for those with vision conditions by introducing a feature into its Chrome browser that uses machine learning to recognize and describe images. The image description will be generated automatically using the same technology that drives Google Lens... The text descriptions use the phrase "appears to be" to let users know that it is a description of an image. So, for example, Chrome might say, "Appears to be a motorized scooter." This will be a cue to let the person know that it is a description generated by the AI and may not be completely accurate. The feature is only available for those with screen readers or Braille displays. "The unfortunate state right now is that there are still millions and millions of unlabeled images across the web," explains Google's senior accessibility program manager. "When you're navigating with a screen reader or a Braille display, when you get to one of those images, you'll actually just basically hear 'image' or 'unlabeled graphic,' or my favorite, a super long string of numbers which is the file name, which is just totally irrelevant."

Read more of this story at Slashdot.

07:46

WireGuard 0.0.20191012 Released With Latest Fixes [Phoronix]

WireGuard is still working on transitioning to the Linux kernel's existing crypto API as a faster approach to finally make it into the mainline kernel, but for those using the out-of-tree WireGuard secure VPN tunnel support, a new development release is available...

07:34

Microsoft's New Keyboards Have Dedicated Keys For 'Office' and Emojis [Slashdot]

"Microsoft's latest keyboards now include dedicated Office and emoji keys," reports the Verge: The software giant was previously experimenting with an Office key on keyboards earlier this year, and now the company is launching a new Ergonomic and slim Bluetooth Keyboard that include the dedicated button. The Office key replaces the right-hand Windows key, and it's used to launch the Office for Windows 10 app that acts as a hub for Microsoft's productivity suite. You can also use the Office key as a shortcut to launch Word, Excel, PowerPoint, and more. Office key + W opens Word for example, while Office key + X opens Excel. Alongside the Office key, there's also a new emoji key on these new keyboards. It will launch the emoji picker inside Windows 10, but you won't be able to assign it to a specific emoji or even create shortcuts, unfortunately... Microsoft quietly launched these new keyboards at the company's Surface hardware event last week, but they'll be available in stores on October 15th.

Read more of this story at Slashdot.

06:59

OpenSUSE's OBS Can Now Spin Windows Subsystem for Linux Images [Phoronix]

openSUSE's Open Build Service (OBS) has been picking up the ability to build Windows Subsystem for Linux (WSL) images for those wishing to craft their own WSL distribution or just rebuild openSUSE from source as a reproducible/verifiable build...

05:34

New OpenLibra Cryptocurrency: Like Libra, But Not Run By Facebook [Slashdot]

"While Facebook's upcoming cryptocurrency Libra struggles to keep partners on board and regulators happy, an alternative called OpenLibra is here to address some of Libra's potential shortcomings," reports Mashable: Announced at Ethereum Foundation's Devcon 5 conference in Osaka, Japan, OpenLibra is described as an "open platform for financial inclusion," with a telling tagline: "Not run by Facebook." OpenLibra aims to be compatible with Libra in a technical sense, meaning someone building an app on the Libra platform should be able to easily deploy it to OpenLibra as well. OpenLibra's token's value will be pegged to the value of the Libra token. But while Libra will be a permissioned blockchain (meaning, roughly, that only permitted parties will be able to run a Libra node), OpenLibra will be permissionless from the start. There's an important difference in governance, too. Libra will initially be run by a foundation comprised of up to a 100 corporations and non-profits. It's not entirely clear how OpenLibra will be governed, but the 26-strong "core team" of the project includes people related to cryptocurrency projects such as Ethereum and Cosmos.

Read more of this story at Slashdot.

05:32

Vulkan To Better Handle Variable Rate Displays / Adaptive-Sync In The Future [Phoronix]

While longtime X11 developer Keith Packard is now working for SiFive on RISC-V processors by day, he's still involved in the Linux graphics world through his contract work for Valve. At the XDC2019 conference earlier this month he presented on display timing, the current Linux plumbing for it, and also how Vulkan will better support variable rate displays in the future...

05:11

GNOME's Mutter 3.35.1 Fixes The Night Light Mode On Wayland [Phoronix]

With many of the prominent fixes that we've talked about for GNOME Shell and Mutter since last month's 3.34 release having been back-ported to 3.34.1, this weekend's release of GNOME Shell 3.35.1 and Mutter 3.35.1 as the first steps towards GNOME 3.36 aren't all that big. But at least in the case of this new Mutter development release there are some worthwhile fixes...

04:50

Godot's Vulkan Renderer Is Getting Into Increasingly Good Shape [Phoronix]

Lead developer of the open-source Godot 2D/3D game engine Juan Linietsky has continued working daily on the engine's Vulkan renderer ahead of Godot 4.0...

04:41

KDE Plasma 5.17 Seeing Last Minute Bug Fixing [Phoronix]

With KDE Plasma 5.17 releasing soon, it's been seeing a lot of last minute fixes while feature activity is also brewing around Plasma 5.18...

01:34

Bell Labs Plans Big 50th Anniversary Event For Unix [Slashdot]

Photographer Peter Adams launched a "Faces of Open Source" portrait project in 2014. This week he posted a special announcement on the web site of Bell Labs: Later this month, Bell Labs will celebrate the 50th anniversary of Unix with a special two day "Unix 50" event at their historic Murray Hill headquarters. This event should be one for the history books with many notable Unix and computer pioneers in attendance...! As I was making those photographs (which will be on display at the event), I gained much insight into Bell Labs and the development of Unix. However, it was some of the more personal stories and anecdotes that brought Bell Labs to life and gave me a feel for the people behind the code. One such time was when Ken Thompson (who is an accomplished pilot) told me how he traveled to Russia after the fall of the Soviet Union in order to fly in a MiG-29 fighter jet... Brian Kernighan told me about how a certain portrait of Peter Weinberger found its way into some very interesting places. These included the concrete foundation of a building on Bell Labs campus, the cover images printed onto Unix CD-ROMs, and most notably, painted on the top of a nearby water tower. Which brings us to another important piece of Unix mythology that I learned about: the fictitious Bell Labs employee G.R. Emlin (a.k.a. "the gremlin").... A lot of this folklore (including the gremlin) is going to be on display at the Unix 50 event. The archivists at Bell Labs have outdone themselves by pulling together a massive collection of artifacts taken from the labs where Unix was developed for over 30 years. I was able to photograph a few of these artifacts last year, but so much more will be exhibited at this event -- including several items from the personal archives of some attendees. As if that wasn't enough, the event will also showcase a number of vintage computers and a look into Bell Labs future with a tour of their Future X Labs. The article includes some more quick stories about the Unix pioneers at Bell Labs (including "the gremlin") and argues that "the decision to freely distribute Unix's source code (to anyone who asked for it) inadvertently set the stage for the free and open source software movements that dominate the technology industry today... "In hindsight, maybe 1969 should be called the 'summer of code.'"

Read more of this story at Slashdot.

Saturday, 12 October

22:34

'There's an Automation Crisis Underway Right Now, It's Just Mostly Invisible' [Slashdot]

"There is no 'robot apocalypse', even after a major corporate automation event," writes Gizmodo, citing something equally ominous in new research by a team of economists. merbs shared their report: Instead, automation increases the likelihood that workers will be driven away from their previous jobs at the companies -- whether they're fired, or moved to less rewarding tasks, or quit -- and causes a long-term loss of wages for the employee. The report finds that "firm-level automation increases the probability of workers separating from their employers and decreases days worked, leading to a 5-year cumulative wage income loss of 11 percent of one year's earnings." That's a pretty significant loss. Worse still, the study found that even in the Netherlands, which has a comparatively generous social safety net to, say, the United States, workers were only able to offset a fraction of those losses with benefits provided by the state. Older workers, meanwhile, were more likely to retire early -- deprived of years of income they may have been counting on. Interestingly, the effects of automation were felt similarly through all manner of company -- small, large, industrial, services-oriented, and so on. The study covered all non-finance sector firms, and found that worker separation and income loss were "quite pervasive across worker types, firm sizes and sectors." Automation, in other words, forces a more pervasive, slower-acting and much less visible phenomenon than the robots-are-eating-our-jobs talk is preparing us for. "People are focused on the damage of automation being mass unemployment," study author James Bessen, an economist at Boston University, tells me in an interview. "And that's probably wrong...." According to Bessen, compared to firms that have not automated, the rate of workers leaving their jobs is simply higher, though from the outside, it can resemble more straightforward turnover. "But it's more than attrition," he says. "A much greater percentage -- 8 percent more -- are leaving." And some never come back to work. "There's a certain percentage that drop out of the labor force. That five years later still haven't gotten a job." The result, Bessen says, is an added strain on the social safety net that it is currently woefully unprepared to handle.

Read more of this story at Slashdot.

19:34

Should High School Computer Science Classes Count as a Math Credit? [Slashdot]

"In a widely-reprinted essay, Ohio State University assistant professor of physics Chris Orban ponders whether the tech world did students a favor or disservice by getting states to count computer science as high school math credit," writes long-time Slashdot reader theodp. The assistant physics professor writes: In 2013, a who's who of the tech world came together to launch a new nonprofit called Code.org. The purpose of the organization was to get more computer science into schools. Billionaires like Mark Zuckerberg and Bill Gates donated millions of dollars to the group. According to the organization's last annual report...$6.9 million went to advocate for state legislation across the country. As part of the organization's mission to "make computer science count" in K-12 education, code.org takes credit for having influenced graduation policies in 42 states. Today, 47 states and the District of Columbia allow computer science classes to count in place of math classes like Algebra 2. Prior to the organization's work, only a few states allowed computer science to count for math credit. In addition, 29 states passed legislation allowing computer science to count in place of a science course. When computer science begins to count as math or science, it makes sense to ask if these changes are helping America's students or hurting them... I worry that students may take computer science just to avoid the more difficult math and science courses they need for college. Computer science could be a way for students to circumvent graduation requirements while adults look the other way.... Computer science advocates have created a kind of national experiment. The next few years will show if this was a good idea, but only if we're looking at more than just the numbers of students taking computer science.

Read more of this story at Slashdot.

17:34

Apple Told Some Apple TV+ Show Developers Not To Anger China [Slashdot]

An anonymous reader quotes BuzzFeed News: In early 2018 as development on Apple's slate of exclusive Apple TV+ programming was underway, the company's leadership gave guidance to the creators of some of those shows to avoid portraying China in a poor light, BuzzFeed News has learned. Sources in a position to know said the instruction was communicated by Eddy Cue, Apple's SVP of internet software and services, and Morgan Wandell, its head of international content development. It was part of Apple's ongoing efforts to remain in China's good graces after a 2016 incident in which Beijing shut down Apple's iBooks Store and iTunes Movies six months after they debuted in the country. A spokesperson for Apple declined comment. Apple's tiptoeing around the Chinese government isn't unusual in Hollywood. It's an accepted practice. "They all do it," one showrunner who was not affiliated with Apple told BuzzFeed News. "They have to if they want to play in that market. And they all want to play in that market. Who wouldn't?"

Read more of this story at Slashdot.

10:32

GNU Binutils 2.33.1 Released With Support For Newer Arm Cortex CPUs, SVE2/TME/MVE [Phoronix]

GNU Binutils 2.33 was tagged in Git two weeks ago but seemingly without any release announcement while now Binutils 2.33.1 has been released...

09:34

Saturday Morning Breakfast Cereal - Where [Saturday Morning Breakfast Cereal]




Hovertext:
It's actually a test. The ones who 'fail' to find Waldo are chosen for a secret agency, battling for truth, run by Waldo himself.



08:45

Red Hat's New Graphics Engineer Is A Longtime AMD/ATI Linux Developer [Phoronix]

Red Hat had been looking to hire another experienced open-source graphics driver developer and for that their newest member on their growing open-source graphics team is a longtime AMD/ATI developer...

07:00

It's crowded in here! [The Cloudflare Blog]


We recently gave a presentation on Programming socket lookup with BPF at the Linux Plumbers Conference 2019 in Lisbon, Portugal. This blog post is a recap of the problem statement and proposed solution we presented.


Our edge servers are crowded. We run more than a dozen public facing services, leaving aside all the internal ones that do the work behind the scenes.

Quick Quiz #1: How many can you name? We blogged about them! Jump to answer.

These services are exposed on more than a million Anycast public IPv4 addresses partitioned into 100+ network prefixes.

To keep things uniform every Cloudflare edge server runs all services and responds to every Anycast address. This allows us to make efficient use of the hardware by load-balancing traffic between all machines. We have shared the details of Cloudflare edge architecture on the blog before.


Granted not all services work on all the addresses but rather on a subset of them, covering one or several network prefixes.

So how do you set up your network services to listen on hundreds of IP addresses without driving the network stack over the edge?

Cloudflare engineers have had to ask themselves this question more than once over the years, and the answer has changed as our edge evolved. This evolution forced us to look for creative ways to work with the Berkeley sockets API, a POSIX standard for assigning a network address and a port number to your application. It has been quite a journey, and we are not done yet.

When life is simple - one address, one socket


The simplest kind of association between an (IP address, port number) and a service that we can imagine is one-to-one. A server responds to client requests on a single address, on a well known port. To set it up the application has to open one socket for each transport protocol (be it TCP or UDP) it wants to support. A network server like our authoritative DNS would open up two sockets (one for UDP, one for TCP):

(192.0.2.1, 53/tcp) ⇨ ("auth-dns", pid=1001, fd=3)
(192.0.2.1, 53/udp) ⇨ ("auth-dns", pid=1001, fd=4)

To take it to Cloudflare scale, the service is likely to have to receive on at least a /20 network prefix, which is a range of IPs with 4096 addresses in it.


This translates to opening 4096 sockets for each transport protocol, something that is not likely to go unnoticed when looking at the ss tool output.

$ sudo ss -ulpn 'sport = 53'
State  Recv-Q Send-Q  Local Address:Port Peer Address:Port
…
UNCONN 0      0           192.0.2.40:53        0.0.0.0:*    users:(("auth-dns",pid=77556,fd=11076))
UNCONN 0      0           192.0.2.39:53        0.0.0.0:*    users:(("auth-dns",pid=77556,fd=11075))
UNCONN 0      0           192.0.2.38:53        0.0.0.0:*    users:(("auth-dns",pid=77556,fd=11074))
UNCONN 0      0           192.0.2.37:53        0.0.0.0:*    users:(("auth-dns",pid=77556,fd=11073))
UNCONN 0      0           192.0.2.36:53        0.0.0.0:*    users:(("auth-dns",pid=77556,fd=11072))
UNCONN 0      0           192.0.2.31:53        0.0.0.0:*    users:(("auth-dns",pid=77556,fd=11071))
…

The approach, while naive, has an advantage: when an IP from the range gets attacked with a UDP flood, the receive queues of sockets bound to the remaining IP addresses are not affected.
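As a concrete illustration, here is a minimal sketch of the naive setup. It is not taken from the original post: it assumes the addresses in the prefix are already assigned to the host and that the process is allowed to bind port 53, and the prefix below is just a placeholder.

import ipaddress
from socket import socket, AF_INET, SOCK_DGRAM

# Hypothetical prefix; a real deployment would iterate over the service's /20.
prefix = ipaddress.ip_network('192.0.2.0/24')

sockets = []
for addr in prefix.hosts():
    s = socket(AF_INET, SOCK_DGRAM, 0)
    s.bind((str(addr), 53))  # one UDP socket per address in the range
    sockets.append(s)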

Life can be easier - all addresses, one socket


It seems rather silly to create so many sockets for one service to receive traffic on a range of addresses. Not only that, the more listening sockets there are, the longer the chains in the socket lookup hash table. We have learned the hard way that going in this direction can hurt packet processing latency.

The sockets API comes with a big hammer that can make our life easier - the INADDR_ANY aka 0.0.0.0 wildcard address. With INADDR_ANY we can make a single socket receive on all addresses assigned to our host, specifying just the port.

from socket import socket, AF_INET, SOCK_STREAM

s = socket(AF_INET, SOCK_STREAM, 0)
s.bind(('0.0.0.0', 12345))  # INADDR_ANY: receive on every local address
s.listen(16)

Quick Quiz #2: Is there another way to bind a socket to all local addresses? Jump to answer.

In other words, compared to the naive “one address, one socket” approach, INADDR_ANY allows us to have a single catch-all listening socket for the whole IP range on which we accept incoming connections.

In Linux this is possible thanks to a two-phase listening socket lookup, where the lookup falls back to searching for an INADDR_ANY socket if a more specific match has not been found.


Another upside of binding to 0.0.0.0 is that our application doesn’t need to be aware of what addresses we have assigned to our host. We are also free to assign or remove the addresses after binding the listening socket. No need to reconfigure the service when its listening IP range changes.

On the other hand, if our service should be listening on just the A.B.C.0/20 prefix, binding to all local addresses is more than we need. We might unintentionally expose an otherwise internal-only service to external traffic without a proper firewall or a socket filter in place.

Then there is the security angle. Since we now only have one socket, attacks attempting to flood any of the IPs assigned to our host on our service’s port, will hit the catch-all socket and its receive queue. While in such circumstances the Linux TCP stack has your back, UDP needs special care or legitimate traffic might drown in the flood of dropped packets.

Possibly the biggest downside, though, is that a service listening on the wildcard INADDR_ANY address claims the port number exclusively for itself. Binding over the wildcard-listening socket with a specific IP and port fails miserably due to the address already being taken (EADDRINUSE).

bind(3, {sa_family=AF_INET, sin_port=htons(12345), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
bind(4, {sa_family=AF_INET, sin_port=htons(12345), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EADDRINUSE (Address already in use)

Unless your service is UDP-only, setting the SO_REUSEADDR socket option will not help you overcome this restriction. The only way out is to turn to SO_REUSEPORT, normally used to construct a load-balancing socket group. And that is only if you are lucky enough to run the port-conflicting services as the same user (UID). That is a story for another post.

Quick Quiz #3: Does setting the SO_REUSEADDR socket option have any effect at all when there is bind conflict? Jump to answer.

Life gets real - one port, two services

As it happens, at the Cloudflare edge we do host services that share the same port number but otherwise respond to requests on non-overlapping IP ranges. A prominent example of such port-sharing is our 1.1.1.1 recursive DNS resolver running side-by-side with the authoritative DNS service that we offer to all customers.

Sadly the sockets API doesn’t allow us to express a setup in which two services share a port and accept requests on disjoint IP ranges.

However, as Linux development history shows, any networking API limitation can be overcome by introducing a new socket option, with sixty-something options available (and counting!).

Enter SO_BINDTOPREFIX.


Back in 2016 we proposed an extension to the Linux network stack. It allowed services to constrain a wildcard-bound socket to an IP range belonging to a network prefix.

import struct
from socket import socket, inet_aton, AF_INET, SOCK_STREAM, SOL_IP

# IP_BINDTOPREFIX comes from the out-of-tree kernel patch;
# it is not defined in Python's socket module.

# Service 1, 127.0.0.0/20, 1234/tcp
net1, plen1 = '127.0.0.0', 20
bindprefix1 = struct.pack('BBBBBxxx', *inet_aton(net1), plen1)

s1 = socket(AF_INET, SOCK_STREAM, 0)
s1.setsockopt(SOL_IP, IP_BINDTOPREFIX, bindprefix1)
s1.bind(('0.0.0.0', 1234))
s1.listen(1)

# Service 2, 127.0.16.0/20, 1234/tcp
net2, plen2 = '127.0.16.0', 20
bindprefix2 = struct.pack('BBBBBxxx', *inet_aton(net2), plen2)

s2 = socket(AF_INET, SOCK_STREAM, 0)
s2.setsockopt(SOL_IP, IP_BINDTOPREFIX, bindprefix2)
s2.bind(('0.0.0.0', 1234))
s2.listen(1)

This mechanism has served us well since then. Unfortunately, it didn’t get accepted upstream due to being too specific to our use-case. Having no better alternative we ended up maintaining patches in our kernel to this day.

Life gets complicated - all ports, one service

Just when we thought we had things figured out, we were faced with a new challenge. How to build a service that accepts connections on any of the 65,535 ports? The ultimate reverse proxy, if you will, code named Spectrum.

The bind syscall offers very little flexibility when it comes to mapping a socket to a port number. You can either specify the number you want or let the network stack pick an unused one for you. There is no counterpart of INADDR_ANY, a wildcard value to select all ports (INPORT_ANY?).

To achieve what we wanted, we had to turn to TPROXY, a Netfilter / iptables extension designed for intercepting remote-destined traffic on the forward path. However, we use it to steer local-destined packets, that is ones targeted to our host, to a catch-all-ports socket.

iptables -t mangle -I PREROUTING \
         -d 192.0.2.0/24 -p tcp \
         -j TPROXY --on-ip=127.0.0.1 --on-port=1234

TPROXY-based setup comes at a price. For starters, your service needs elevated privileges to create a special catch-all socket (see the IP_TRANSPARENT socket option). Then you also have to understand and consider the subtle interactions between TPROXY and the receive path for your traffic profile, for example:

  • does connection tracking register the flows redirected with TPROXY?
  • is listening socket contention during a SYN flood when using TPROXY a concern?
  • do other parts of the network stack, like XDP programs, need to know about TPROXY redirecting packets?

These are some of the questions we needed to answer, and after running it in production for a while now, we have a good idea of what the consequences of using TPROXY are.
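For completeness, here is a minimal sketch, not from the original post, of the catch-all socket that pairs with a TPROXY rule like the one above. It assumes the process holds CAP_NET_ADMIN for the IP_TRANSPARENT option, and the address and port are placeholders matching the rule's --on-ip / --on-port.

from socket import socket, AF_INET, SOCK_STREAM, SOL_IP

IP_TRANSPARENT = 19  # from <linux/in.h>; allows accepting traffic for non-local addresses

s = socket(AF_INET, SOCK_STREAM, 0)
s.setsockopt(SOL_IP, IP_TRANSPARENT, 1)  # requires CAP_NET_ADMIN
s.bind(('127.0.0.1', 1234))              # matches --on-ip / --on-port above
s.listen(16)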

That said, it would not come as a shock if tomorrow we discovered something new about TPROXY. Due to its complexity we’ve always considered using it to steer local-destined traffic a hack, a use-case outside of its intended application. No matter how well understood, a hack remains a hack.

Can BPF make life easier?

Despite its complex nature TPROXY shows us something important. No matter what IP or port the listening socket is bound to, with a bit of support from the network stack, we can steer any connection to it. As long as the application is ready to handle this situation, things work.

Quick Quiz #4: Are there really no problems with accepting any connection on any socket? Jump to answer.

This is a really powerful concept. With a bunch of TPROXY rules, we can configure any mapping between (address, port) tuples and listening sockets.

💡 Idea #1: A local-destined connection can be accepted by any listening socket.

We didn’t tell you the whole story before. When we published the SO_BINDTOPREFIX patches, they did not just get rejected. As sometimes happens, by posting the wrong answer we got the right answer to our problem:

❝BPF is absolutely the way to go here, as it allows for whatever user specified tweaks, like a list of destination subnetwork, or/and a list of source network, or the date/time of the day, or port knocking without netfilter, or … you name it.❞

💡 Idea #2: How we pick a listening socket can be tweaked with BPF.

Combine the two ideas and we arrive at an exciting concept. Let’s run BPF code to match an incoming packet with a listening socket, ignoring the address the socket is bound to. 🤯

Here’s an example to illustrate it.


All packets coming in on the 1.1.1.0/24 prefix, port 53, are steered to socket sk:2, while traffic targeted at 3.3.3.3, on any port number, lands in socket sk:4.

Welcome BPF inet_lookup


To make this concept a reality we are proposing a new mechanism to program the socket lookup with BPF. What is socket lookup? It’s a stage on the receive path where the transport layer searches for a socket to dispatch the packet to. The last possible moment to steer packets before they land in the selected socket’s receive queue. That is where we attach a new type of BPF program called inet_lookup.


If you recall, socket lookup in the Linux TCP stack is a two phase process. First the kernel will try to find an established (connected) socket matching the packet 4-tuple. If there isn’t one, it will continue by looking for a listening socket using just the packet 2-tuple as key.

Our proposed extension allows users to program the second phase, the listening socket lookup. If present, a BPF program is allowed to choose a listening socket and terminate the lookup. Our program is also free to ignore the packet, in which case the kernel will continue to look for a listening socket as usual.


How does this new type of BPF program operate? On input, as context, it gets handed a subset of information extracted from packet headers, including the packet 4-tuple. Based on the input the program accesses a BPF map containing references to listening sockets, and selects one to yield as the socket lookup result.

If we take a look at the corresponding BPF code, the program structure resembles a firewall rule. We have some match statements followed by an action.


You may notice that we don’t access the BPF map with sockets directly. Instead we follow an established pattern in BPF called “map based redirection”, where a dedicated BPF helper accesses the map and carries out any steps necessary to redirect the packet.

We’ve skipped over one thing. Where does the BPF map of sockets come from? We create it ourselves and populate it with sockets. This is most easily done if your service uses systemd socket activation. systemd will let you associate more than one service unit with a socket unit, and each of those services will receive a file descriptor for the same socket. From there it’s just a matter of inserting the socket into the BPF map.
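As a rough sketch of the socket activation side, assuming the conventional LISTEN_FDS protocol (the insertion into the BPF map itself, which needs the proposed kernel support and tooling, is not shown):

import os
from socket import fromfd, AF_INET, SOCK_STREAM

SD_LISTEN_FDS_START = 3  # systemd passes activated sockets starting at file descriptor 3

nfds = int(os.environ.get('LISTEN_FDS', '0'))
listen_sockets = [
    fromfd(SD_LISTEN_FDS_START + i, AF_INET, SOCK_STREAM)  # fromfd() duplicates the descriptor
    for i in range(nfds)
]
# Every service unit attached to the same socket unit gets a descriptor for the
# same underlying socket; from here it can be inserted into the BPF map of sockets.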


Demo time!

This is not just a concept. We have already published a first working set of patches for the kernel together with ancillary user-space tooling to configure the socket lookup to your needs.

If you would like to see it in action, you are in luck. We’ve put together a demo that shows just how easily you can bind a network service to (i) a single port, (ii) all ports, or (iii) a network prefix. On-the-fly, without having to restart the service! There is a port scan running to prove it.

You can also bind to all-addresses-all-ports (0.0.0.0/0) because why not? Take that INADDR_ANY. All thanks to BPF superpowers.


Summary

We have gone over how the way we bind services to network addresses on the Cloudflare edge has evolved over time. Each approach has its pros and cons, summarized below. We are currently working on a new BPF-based mechanism for binding services to addresses, which is intended to address the shortcomings of existing solutions.

bind to one address and port

👍 flood traffic on one address hits one socket, doesn’t affect the rest
👎 as many sockets as listening addresses, doesn’t scale

bind to all addresses with INADDR_ANY

👍 just one socket for all addresses, the kernel thanks you
👍 application doesn’t need to know about listening addresses
👎 flood scenario requires custom protection, at least for UDP
👎 port sharing is tricky or impossible

bind to a network prefix with SO_BINDTOPREFIX

👍 two services can share a port if their IP ranges are non-overlapping
👎 custom kernel API extension that never went upstream

bind to all ports with TPROXY

👍 enables redirecting all ports to a listening socket and more
👎 meant for intercepting forwarded traffic early on the ingress path
👎 has subtle interactions with the network stack
👎 requires privileges from the application

bind to anything you want with BPF inet_lookup

👍 allows for the same flexibility as with TPROXY or SO_BINDTOPREFIX
👍 services don’t need extra capabilities, meant for local traffic only
👎 needs cooperation from services or PID 1 to build a socket map


Getting to this point has been a team effort. A special thank you to Lorenz Bauer and Marek Majkowski who have contributed in an essential way to the BPF inet_lookup implementation. The SO_BINDTOPREFIX patches were authored by Gilberto Bertin.

Fancy joining the team? Apply here!

Quiz Answers

Quiz 1

Q: How many Cloudflare services can you name?

  1. HTTP CDN (tcp/80)
  2. HTTPS CDN (tcp/443, udp/443)
  3. authoritative DNS (udp/53)
  4. recursive DNS (udp/53, 853)
  5. NTP with NTS (udp/1234)
  6. Roughtime time service (udp/2002)
  7. IPFS Gateway (tcp/443)
  8. Ethereum Gateway (tcp/443)
  9. Spectrum proxy (tcp/any, udp/any)
  10. WARP (udp)

Go back

Quiz 2

Q: Is there another way to bind a socket to all local addresses?

Yes, there is - by not bind()’ing it at all. Calling listen() on an unbound socket is equivalent to binding it to INADDR_ANY and letting the kernel pick a free port.

$ strace -e socket,bind,listen nc -l
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
listen(3, 1)                            = 0
^Z
[1]+  Stopped                 strace -e socket,bind,listen nc -l
$ ss -4tlnp
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port
LISTEN     0      1            *:42669      

Go back

Quiz 3

Q: Does setting the SO_REUSEADDR socket option have any effect at all when there is bind conflict?

Yes. If two processes are racing to bind and listen on the same TCP port, on an overlapping IP, setting SO_REUSEADDR changes which syscall will report an error (EADDRINUSE). Without SO_REUSEADDR it will always be the second bind. With SO_REUSEADDR set there is a window of opportunity for a second bind to succeed but the subsequent listen to fail.
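One quick way to observe this from a single process, as a sketch with a hypothetical loopback address and port on Linux:

from socket import socket, AF_INET, SOCK_STREAM, SOL_SOCKET, SO_REUSEADDR

a = socket(AF_INET, SOCK_STREAM)
b = socket(AF_INET, SOCK_STREAM)
for s in (a, b):
    s.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)

a.bind(('127.0.0.1', 12345))
b.bind(('127.0.0.1', 12345))  # succeeds: neither socket is listening yet
a.listen(1)                   # first listen() claims the port
b.listen(1)                   # raises OSError: EADDRINUSE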

Go back

Quiz 4

Q: Are there really no problems with accepting any connection on any socket?

If the connection is destined for an address assigned to our host, i.e. a local address, there are no problems. However, for remote-destined connections, sending return traffic from a non-local address (i.e., one not present on any interface) will not get past the Linux network stack. The IP_TRANSPARENT socket option lifts this restriction by bypassing the protection mechanism known as the source address check.

Go back

06:56

FreeBSD 12.1 Is Near With Libomp Finally In Base, LLD Linker By Default For i386 [Phoronix]

FreeBSD 12.1 is near with the first release candidate shipping this weekend. While a point release over the nearly one year old FreeBSD 12.0, it does come with some notable changes in tow...

05:31

Dav1d 0.5 Released With AVX2, SSSE3 & ARM64 Performance Improvements - Benchmarks [Phoronix]

Friday marked the release of dav1d 0.5 as the newest version of this speedy open-source AV1 video decoder. With dav1d 0.5 are optimizations to help out SSSE3 most prominently but also AVX2 and ARM64 processors. Here are some initial benchmarks so far of this new dav1d video decoder on Linux...

05:13

XWayland Lands RandR/Vidmode Emulation For Better Game Handling [Phoronix]

There is yet another significant improvement found for XWayland in the latest X.Org Server code that will hopefully see a long overdue release soon...

04:48

KDE Frameworks 6 Discussions Light Up With Qt 6.0 Coming Next Year [Phoronix]

With The Qt Company working hard now on development around Qt 6, the KDE developers are beginning their early discussions over their path forward to adopting this next evolutionary tool-kit update...

04:04

xf86-video-amdgpu 19.1 Delivers A Batch Of Fixes [Phoronix]

AMD has released a new version of their X.Org display driver...

Friday, 11 October

22:09

X-Plane 11.50 Flight Simulator Bringing Vulkan Support [Phoronix]

For years we have been looking forward to the realistic X-Plane flight simulator rendered by Vulkan as an alternative to their long-standing OpenGL render and with X-Plane 11.50 that is finally being made a reality...

16:51

We, Wall, we, Wall, Raku: Perl creator blesses new name for version 6 of text-wrangling lingo [The Register]

Perl 6 set to be reincarnated as Raku, as favored by Larry Wall

Perl 6 should soon be known as Raku, now that Perl creator Larry Wall has given his blessing to the name change.…

16:20

What's it like to come out as LGBTQIA+ at work? [The Cloudflare Blog]


Today is the 31st Anniversary of National Coming Out Day. I wanted to highlight the importance of this day, share coming out resources, and publish some stories of what it's like to come out in the workplace.

About National Coming Out Day

Thirty-one years ago, on the anniversary of the National March on Washington for Lesbian and Gay Rights, we first observed National Coming Out Day as a reminder that one of our most basic tools is the power of coming out. One out of every two Americans has someone close to them who is gay or lesbian. For transgender people, that number is only one in 10.

Coming out - whether it is as lesbian, gay, bisexual, transgender or queer - STILL MATTERS. When people know someone who is LGBTQ, they are far more likely to support equality under the law. Beyond that, our stories can be powerful to each other.

Each year on October 11th, National Coming Out Day continues to promote a safe world for LGBTQ individuals to live truthfully and openly. Every person who speaks up changes more hearts and minds, and creates new advocates for equality.

For more on coming out, visit HRC's Coming Out Center.

Source: https://www.hrc.org/resources/national-coming-out-day

Coming out stories from Proudflare

Last National Coming Out Day, I shared some stories from Proudflare members in this blog post. This year, I wanted to shift our focus to the experience and challenges of coming out in the workplace. I wanted to share what it was like for some of us to come out at Cloudflare, at our first companies, and point out some of the stresses, challenges, and risks involved.

Check out these five examples below and share your own in the comments section and/or to the people around you if you'd like!

“Coming out twice” from Lily - Cloudflare Austin

While my first experience of coming out professionally was at my previous company, I thought I’d share some of the differences between my experiences at Cloudflare and this other company.

Reflecting retrospectively, coming out was so immensely liberating. I've never been happier, but at the time I was a mess. LGBTQIA+ people still have little to no legal protection, and having been initially largely rejected by my parents and several of my friends after coming out to them, I felt like I was at sea, floating without a raft. This feeling of unease was compounded by my particular coming out being a two part series: I wasn’t only coming out as transgender, but now also as a lesbian.

Eventually, after the physical changes became too noticeable to ignore (around 7 months ago), I worked up the courage to come out at work. The company I was working for was awful in many ways; bad culture, horrible project manager, and rampant nepotism. Despite this, I was pleasantly surprised that what I told them was almost immediately accepted. Surely this was finally a win for me? However, that initial optimism didn’t last. As time went on, it became clear that saying you accept it and actually internalizing it are completely different. I started being questioned about needed medical appointments, and I wasn’t really being treated any different than before. I still have no idea if it played into the reason they fired me for “performance” despite never bringing it up before.

As I started applying for new jobs, one thing was always on my mind: will this job be different? Thankfully the answer was yes; my experience at Cloudflare has been completely different. Through the entire hiring process, I never once had to out myself. Finally when I had to come out to use my legal name on the offer letter, Cloudflare handled it with such grace. One such example was that they went so far as to put my preferred name in quotes next to my legal one on the document. These little nuggets of kindness are visible all over the company - you can tell people are accepting and genuinely care. However, the biggest difference was that Cloudflare supports and celebrates the LGBTQIA+ community but doesn’t emphasize it. If you don’t want it to be part of your identity it doesn’t have to be. Looking to the future I hope I can just be a woman that loves women, not a trans-woman that loves women, and I think Cloudflare will be supportive of that.

A story from Mark - Cloudflare London

My coming out story? It involves an awful lot of tears in a hotel room in Peru, about three and a half thousand miles away from anyone I knew.

That probably sounds more dramatic than the reality. I’d been visiting some friends in Minnesota and I was due to head to Peru to hike the Machu Picchu trail, but a missed flight connection saw me stranded in Atlanta overnight.

A couple of months earlier, I’d kind of come out to myself. This was less a case of admitting my sexuality and more finally learning exactly what it is. I’d only just turned 40 and, months later, I was still trying to come to terms with what it all meant; reappraising your sexuality in your 40s is not a journey for the faint of heart! I hadn’t shared it with anyone yet, but while sitting in a thuddingly dull hotel room in Atlanta, it just felt like time. So I penned my coming out letter.

The next day I boarded a plane, posted my letter to Facebook, turned off my phone, and then experienced what was, without question, The. Longest. Flight. Of. My. Life. This was followed, perhaps unsurprisingly, by the longest taxi ride of my life.

Eventually, after an eternity or two had passed, I reached my hotel room, connected to the hotel wifi and read through the messages that had accumulated over the past 8 hours or so. Messages from my friends, and family, and even my Mum. The love and support I got from all of them just about broke me. I practically dissolved in a puddle of tears as I read through everything. Decades of pent up confusion and pain washed away in those tears.

I’ll never forget the sense of acceptance I felt after all that.

As for coming out at work, well, let’s see how it goes: Hi, I’m Mark, and I’m asexual.

A story from Jacob - Cloudflare San Francisco

I started my career working in consulting in a conservative environment where I was afraid that coming out would cause me to be taken less seriously by my male coworkers. I remember casually mentioning my partner at the time to a couple of close coworkers to gauge their response. They surprised me and turned out to be very accepting and insisted that I bring him to our Holiday Party later that year. That event was the first time I came out to my entire office and I remember feeling very nervous before stepping into the room.

My anxiety was soon quelled with a warm welcome from my office leadership and from then on I didn’t feel like I was dancing around the elephant in the room. After this experience being out at work is not something I think greatly about, I have been very fortunate to work in accepting environments including at Cloudflare!

A story from Malavika - Cloudflare London

Nearly a decade has passed since I first came out in a professional setting, when I first started working at a global investment bank in Manhattan. The financial services industry was, and continues to be, known for its machismo, and at the time, gay marriage was still illegal in the United States. Despite being out in my personal life, the thought of being out at work terrified me. I already felt so profoundly different from my coworkers as a woman and a person of colour, and thus I feared that my LGBTQIA+ identity would further reduce my chances of career advancement. I had no professional role models to signal that it was okay to be LGBTQIA+ in my career.

Soon after starting this job, a close friend and university classmate invited me to a dinner for LGBTQIA+ young professionals in financial services and management consulting. I had never attended an event targeted at LGBTQIA+ professionals, let alone met an out LGBTQIA+ individual working outside of the arts, academia or nonprofit sectors. Looking around the dining room, I felt as though I had spotted a unicorn: a handful of out senior leaders at top investment banks and consulting firms sat among nearly 40 ambitious young professionals, sharing their coming out stories and providing invaluable career advice. Before this event, I would have never believed that there were so many people “like me” within the industry, and most certainly not in executive positions. For the first time, I felt a strong sense of belonging, as I finally had LGBTQIA+ role models to look up to professionally, and I no longer felt afraid of being open about my sexuality professionally.

After this event, I felt inspired and energised. Over the subsequent weeks, my authentic self began to show. My confidence and enthusiasm at work dramatically increased. I was able to build trust with my colleagues more easily, and my managers lauded me for my ability to incorporate constructive feedback quickly.

As I reflect on my career trajectory, I have not succeeded in spite of my sexuality, but rather, because of being out as a bisexual woman. Over the course of my career, I have developed strong professional relationships with senior LGBTQIA+ mentors, held leadership positions in a variety of diversity networks and organisations, and attended a number of inspiring conferences and events. Without the anxiety of having to hide an important part of my identity, I am able to be the confident, intelligent woman I truly am. And that is precisely why I am actively involved in Proudflare, Cloudflare’s employee resource group for LGBTQIA+ individuals. I strongly believe that by creating an inclusive workplace - for anyone who feels different or out of place - all employees will have the support and confidence to shine in their professional and personal lives.

A story from Chase - Cloudflare San Francisco

I really discovered my sexuality in college. Growing up, there weren’t many queer people in my life. I always had a loving family that would presumably accept me for who I was, but the lack of any queer role models in my life made me think that I was straight for quite some time. I just didn’t know what being gay was.

I always had a best friend - someone that I would end up spending all my time with. This friend wouldn’t always be the same person, but inevitably I would latch on one person and focus most of my emotional energy on our friendship. In college this friend was Daniel. We met while pledging a business fraternity our freshman year and quickly became close friends. Daniel made me feel different. I thought about him when I wasn't with him, I wanted to be with him all the time, and most of all I would get jealous when he would date women. He saw right through me and eventually got me to open up about being gay. Our long emotional text conversation ended with me asking if he had anything he wanted to share with me (fingers crossed). His answer - “I don’t know why everyone assumes I’m gay, I’m not.” Heart = Broken.

Fast forward 6 months and we decide to live together our Junior year. I slowly started becoming more comfortable with my sexuality and began coming out. I started with my close friends, then my brother, then slightly less close friends, but kept getting hung up on my parents. Luckily, Daniel made that easier. That text from Daniel about not being gay ended up being not as set in stone as I thought. We started secretly dating for almost a year and I was the happiest I have ever been. The thrills of a secret relationship can only last so long and eventually we knew we needed to tell the world. We came out to our parents together, as a couple. We were there for each other for the good conversations, the tough conversations, the “Facebook Official” post, and coming out at our first corporate jobs (A never ending cycle). We were so fortunate to both work at warm, welcoming companies when we came out and continue to work at such companies today.

Coming out wasn’t easy but knowing I didn’t have to do it alone made it a whole heck of a lot easier. Happy four-year anniversary, Dan.

Resources for living openly

To find resources about living openly, visit the Human Rights Campaign’s Coming Out Center. I hope you'll be true to yourselves and always be loud and proud.

About Proudflare

To read more about Proudflare and why Cloudflare cares about inclusion in the workplace, read Proudflare’s first pride blog post.


15:38

From Libra to leave-ya: eBay, Visa, Stripe, PayPal, others flee Facebook's crypto-coin [The Register]

Zuck-bucks dead in the water as payment giants snub currency tech

Updated  The Facebook-backed Libra crypto-currency project was dealt a crushing blow Friday when eBay, Stripe, and others yanked their support.…

15:01

Rspamd 2.0 Released For Advancing Free Software Spam Filtering [Phoronix]

Rspamd 2.0 has been released as the newest version of this leading open-source spam filtering software and it's coming with plenty of changes...

14:27

How bad is Catalina? It's almost Apple Maps bad: MacOS 10.15 pushes Cupertino's low bar for code quality lower still [The Register]

Devs lament 'trash fire' 'Windows Vista-like' release

Comment  Amid Apple's attempt to fend off criticism for its removal, restoration, and re-removal of an app used by pro-democracy protesters in Hong Kong, the company is also facing particularly voluble criticism from users of its latest desktop operating system, macOS Catalina.…

14:04

Tons Of The Intel Tiger Lake "Gen 12" Graphics Compiler Code Just Landed In Mesa 19.3 [Phoronix]

A lot of the Tiger Lake "Gen 12" graphics compiler infrastructure changes to Mesa for Intel's open-source OpenGL and Vulkan Linux drivers were just merged into the Mesa 19.3 code-base...

10:30

KDE Plasma Mobile Is Beginning To Look Surprisingly Good [Phoronix]

The KDE Plasma Mobile team has begun publishing weekly reports on their development efforts for making KDE software more suitable for mobile devices as well as convergence and other efforts in common with KDE on the desktop...

10:00

No ghosts but the Holy one as vicar exorcises spooky tour from UK's most haunted village [The Register]

Plus: Dumb hipsters spaff $3,000 on 'Jesus Shoes'

A vicar has said there's no room for ghosts in the UK's "most haunted village" of Prestbury, Gloucestershire – unless it's one of the Holy variety.…

09:07

Saturday Morning Breakfast Cereal - Scam [Saturday Morning Breakfast Cereal]




Hovertext:
All I'm saying is Garfield never provides citations for anything.



09:00

Openreach's cunning plan to 'turbocharge' the post-Brexit economy: Getting everyone on full-fibre broadband by 2025 [The Register]

£59bn boost – 'if we can get right conditions to invest'

BT's pipe laying subsidiary Openreach has published a list of proposals it claims will help Britain gain full fibre by the mid-2020s.…

08:00

Experts warn UK court digitisation is moving too fast and breaking too many things [The Register]

Not that it was moving quickly to begin with

Ambitious plans to digitise Her Majesty's Courts and Tribunal Service via a £1bn modernisation programme should be slowed down even further, MPs heard this week.…

07:51

Raspberry Pi 4's V3D Mesa Driver Nearing OpenGL ES 3.1 [Phoronix]

Back during the summer, Eric Anholt, who had been the lead developer of Broadcom's VC4/V3D graphics driver stack most notably used by Raspberry Pi boards, left the company to join Google. In his place, the Raspberry Pi Foundation is working with consulting firm Igalia to continue work on the DRM/KMS kernel driver and Gallium3D drivers for this open-source graphics driver stack...

07:00

I can't believe you've done this: Cisco.com asks visitors to explain to IT why they have broken the website [The Register]

Switchzilla's online presence beset by mysterious outages

Cisco has suffered an odd series of outages that briefly KO'd its website and corporate blogs.…

07:00

Intel Compute Runtime 19.40.14409 Adds "Early Support" For Tiger Lake [Phoronix]

As written about a few days ago, Intel engineers added Gen12/Xe Tiger Lake support to their compute stack "NEO" for Linux users. That support has now made it into their latest weekly release of the Intel Compute Runtime...

06:00

Tokyo Olympics, US tariffs Trump Europe's Brexit shakes as global PC shipments balloon to fattest figure in 7 years [The Register]

Extra $37bn levy on notebooks, slabs pushes American retailers to panic buy, buy, buy

Businesses heading for the Windows 7 escape hatch and US retailers panic-buying ahead of the next round of trade tariffs helped PC shipments rise globally in Q3 at the fastest rate in seven-and-a-half years.…

05:00

Oh dear... AI models used to flag hate speech online are, er, racist against black people [The Register]

Tweets written in African-American English slang more likely to be considered offensive

The internet is filled with trolls spewing hate speech, but machine learning algorithms can’t help us clean up the mess.…

04:48

Mir 1.5 Released With Bug Fixes & Wayland Improvements [Phoronix]

Canonical's developers, who continue to advance the Mir display server with its focus on providing an abstraction for Wayland support, have issued a new feature release...

04:22

AMD Linux Driver Bringing BACO Support To Older Sea Islands / Volcanic Islands GPUs [Phoronix]

It's fairly rare these days seeing big patch sets out of AMD focused on improving the open-source Linux driver support for the likes of aging GPUs such as the Sea Islands and Volcanic Islands generations, but this Friday there is some notable development activity...

04:05

Intel's IWD Wireless Daemon Now Supports IPv6 Network Configuration Handling [Phoronix]

Intel's open-source IWD wireless daemon, which continues its work on replacing WPA Supplicant, is up to version 0.22...

04:00

SAP's CEO Bill McDermott quits: Will hand over to co-captains for Next Generation reboot [The Register]

Subspace communication over, enterprise commander out

SAP's chief executive Bill McDermott will not renew his employment contract at the German database software maker.…

03:00

Not a death spiral, I'm trapped in a closed loop of customer experience [The Register]

Much, much worse than a vicious circle

Something for the Weekend, Sir?  I've got myself stuck in a ring. Yes, again. Medical assistance may be required.…

02:49

A Deep Dive Into The Performance-Focused AMDGPU "Bulk Moves" Functionality [Phoronix]

Recently on Phoronix you have likely heard a lot about the LRU "bulk moves" functionality for the AMDGPU driver, after it was talked up by a Valve Linux developer for the performance help it brings to Linux games and then landed in Linux 5.4 as a "fix"...

02:10

Criminalise British drone fliers, snarl MPs amid crackdown demands [The Register]

Geoblocking, weaponisation and more in Parliamentary committee's sights

The British government should make it a crime to disable geofencing and electronic conspicuity on one’s drone, according to MPs from a parliamentary committee looking at future drone regulation.…

02:00

Make your Python code look good with Black on Fedora [Fedora Magazine]

The Python programming language is often praised for its simple syntax. In fact, the language recognizes that code is read much more often than it is written. Black is a tool that automatically formats your Python source code, making it uniform and compliant with the PEP 8 style guide.

How to install Black on Fedora

Installing Black on Fedora is quite simple. Black is maintained in the official repositories.

$ sudo dnf install python3-black

Black is a command line tool and therefore it is run from the terminal.

$ black --help

Format your Python code with Black

Using Black to format a Python code base is straightforward.

$ black myfile.py
All done! ✨ 🍰 ✨ 1 file left unchanged.
$ black path_to_my_python_project/
All done! ✨ 🍰 ✨
165 files reformatted, 24 files left unchanged.

By default Black allows 88 characters per line, meaning that the code will be reformatted to fit within 88 characters per line. It is possible to change this to a custom value, for example:

$ black --line-length 100 my_python_file.py

This will set the line length to allow 100 characters.

Run Black as part of a CI pipeline

Black really shines when it is integrated with other tools, like a continuous integration pipeline.

The --check option lets you verify whether any files need to be reformatted. This is useful to run as a CI test to ensure all your code is formatted in a consistent manner.

$ black --check myfile.py
would reformat myfile.py
All done! 💥 💔 💥
1 file would be reformatted.

Integrate Black with your code editor

Running Black during the continuous integration tests is a great way to keep the code base correctly formatted. But developers really want to forget about formatting and have the tool manage it for them.

Most of the popular code editors support Black. This allows developers to run the formatting tool every time a file is saved. The official documentation details the configuration needed for each editor.

Black is a must-have tool in the Python developer toolbox and is easily available on Fedora.

01:15

The safest place to save your files is somewhere nobody will ever look [The Register]

You shoved your documents where, exactly?

On Call  Friday is that special time of the week when clocks seem to slow to a crawl and software giants drop their buggiest code. It is also the time when The Register pokes a talon into the sack marked "On Call".…

00:29

"VIRTME" Revised For Virtualized Linux Kernel Testing [Phoronix]

The "VIRTME" project was started years ago as a set of simple tools for running a virtualized Linux kernel that uses the host distribution or basic root file-system rather than a complete Linux distribution image. There hasn't been a new release of VIRTME in years but that changed on Thursday...

00:00

Peer into the future at Cisco’s Networking.Next Virtual Event [The Register]

Be the first to see tech giant’s global trends report

Promo  Cisco is inviting the world’s IT leaders to join its Networking.Next Virtual Event on 24 October, offering up a panel of experts who will examine the diverse trends of today, that are shaping tomorrow’s network.…

Thursday, 10 October

23:10

Kiss my ASCII, Microsoft – we've got one million fewer daily active users than you, boasts Slack [The Register]

Redmond's bundled group chat app draws fire from Slackville

Several months after Microsoft crowed about how its Teams group chat app has reached 13 million daily active users, rival Slack has fired back with figures of its own.…

22:19

Mesa's DRM Library Looking To Change Its Versioning Scheme [Phoronix]

Mesa's DRM library could soon be shifting to a date-based versioning scheme similar to what is already employed by Mesa itself (year.release) and the X.Org Server is also looking at similar versioning...

18:00

Good Morning, Jakarta! [The Cloudflare Blog]


Beneath the veneer of glass and concrete, this is a city of surprises and many faces. On 3rd October 2019, we brought together a group of leaders from across a number of industries to connect in Central Jakarta, Indonesia.

Sharing stories at the lunch table, exchanging ideas, listening to the different viewpoints of people from all tiers, paying first-hand attention to all input from customers, and hearing the dreams of some of life's warriors may all sound simple, but they are a source of inspiration and encouragement in helping the cyberspace community in this region.

And our new data center in Jakarta extends our Asia Pacific network to 64 cities, and our global network to 194 cities.

Selamat Pagi

Right on time, Kate Fleming extended a warm welcome to all our Indonesian guests. "We were especially appreciative of the investment of your time that you made coming to join us."

Kate is the Head of Customer Success for APAC. Australian-born, Kate spent the past 5 years living in Malaysia and Singapore. She leads a team of Customer Success Managers in Singapore. The Customer Success team is dispersed across multiple offices and time zones. We are the advocates for Cloudflare Enterprise customers. We help with your on-boarding journey and various post-sales activities, from project and resource management planning to training, configuration recommendations, sharing best practices, acting as a point of escalation, and more.


"Today, the Indonesian Cloudflare team would like to share with you some insights and best practices around how Cloudflare is not only a critical part of any organization’s cyber security planning, but is working towards building a better internet in the process.” - Kate


Ayush Verma, our Solutions Engineer for ASEAN and India, was there to unveil the latest cyber security trends. He shared insights on how to stay ahead of the game in the fast-changing online environment.

Get answers to questions like:
How can I secure my site without sacrificing performance?
What are the latest trends in malicious attacks — and how should I prepare?


Superheroes Behind The Scenes

We were very honored to have two industry leaders speak to us.

Jullian Gafar, the CTO from PT Viva Media Baru.
PT Viva Media Baru is an online media company based out of Jakarta, Indonesia.

Firman Gautama, the VP of Infrastructure & Security from PT. Global Tiket Network.
PT. Global Tiket Network offers hotel, flight, car rental, train, world-class event/concert and attraction tickets.

It was a golden opportunity to hear from the leaders themselves about what’s keeping them busy lately, their own approaches to cyber security, best practices, and easy-to-implement and cost-efficient strategies.  

Fireside Chat Highlights:  Shoutout from Pak Firman, who was very pleased with the support he received from Kartika. He said "most sales people are hard to reach after completing a sale. Kartika always goes the extra mile, she stays engaged with me. The Customer Experience is just exceptional.”


Our Mission Continues

Thank you for giving us your time to connect. It brings us back to our roots and core mission of helping to build a better internet. Based on the principle “The Result Never Betrays the Effort”, we believe that what we are striving for today, by creating various innovations in our services and strategies to improve your business, will in time produce the best results. For this reason, we offer our endless thanks for your support and loyalty in continuing to push forward with us. Always at your service!

Cloudflare Event Crew in Indonesia #CloudflareJKT
Chris Chua (Organiser) | Kate Fleming | Bentara Frans | Ayush Verma | Welly Tandiono | Kartika Mulyo  | Riyan Baharudin

17:32

Microsoft, GitHub staff tell Satya Nadella: It's time to ice ICE, baby. Rip up those tech contracts [The Register]

Turmoil in Redmond over deals with US immigration agents

Microsoft and its GitHub subsidiary are under fire from some of their own employees over service contracts with America's controversial Immigration and Customs Enforcement (ICE) agency.…

15:23

In a touching show of solidarity with the NBA and Blizzard, Apple completely caves to China on HK protest app [The Register]

That's the way the Cook, he crumbles: HKmap banned again

Apple has once again taken down an iOS app aimed at helping Hong Kong protesters avoid police crackdowns in the troubled city.…

14:34

Stalker attacks Japanese pop singer – after tracking her down using reflection in her eyes [The Register]

'If only you could see what I've seen through your eyes'...

A Japanese man indicted on Tuesday for allegedly attacking a 21-year-old woman last month appears to have found where his victim lived by analyzing geographic details in an eye reflection captured in one of her social media photos.…

13:37

Don't be so Maduro: Adobe backs down (a little) on Venezuela sanctions blockade [The Register]

Media giant says it can now pay back subscription fees

Adobe has reversed course on its decision to withhold refund payments from customers in Venezuela.…

11:36

Intel SVT-VP9 Finally Makes Its First Pre-Release For Speedy VP9 Encoding [Phoronix]

While Intel's SVT-VP9 video encoder has been public since February and receiving frequent Git commits for advancing this very fast open-source VP9 video encoder, today it finally saw its first tagged release, being called the SVT-VP9 0.1 pre-release...

11:00

Finfisher malware authors fire off legal threats to silence German journos [The Register]

Haben sie nicht von dem Streisand-Effekt gehört?

Malware authors behind the Finfisher spyware suite, well beloved by dictators, have sent legal threats intended to silence a German news blog that reported them to criminal prosecutors over allegedly illegal malware exports.…

10:00

Creators Update meets its maker: It's 1903 or bust for those clinging to Windows 10 1703 [The Register]

1803 to be euthanised in November

Two faithful Windows 10 versions are to be led out behind the barn by a sad-faced Microsoft engineer.…

09:00

Some fokken arse has bared the privates of 250,000 users' from Dutch brothel forum [The Register]

'Hookers.nl is committed to privacy and we deeply regret the situation.' Ja, hoor!

A Dutch vBulletin forum for sex workers and their clients has reportedly been hacked using that infamous RCE vuln, baring the privates (and data) of a quarter of a million people.…

08:15

Just let us have Huawei and get on with 5G, UK mobe networks tell MPs [The Register]

Another Parliamentary enquiry? Huawei, the Brexit of network policy decisions

British telcos and academics have told a Parliamentary enquiry the UK needs to get on with allowing Huawei equipment into the heart of its future 5G networks.…

08:10

System76 Launches Two Intel Laptops With "Open-Source Firmware" Coreboot [Phoronix]

While not exactly a big surprise, with System76 having done an "OSFC Edition" Coreboot laptop at small scale at the end of the summer, System76 is now formally announcing two Linux laptops shipping with Coreboot as an alternative to their proprietary BIOS...

07:30

See you in Hull: First UK city to be hooked up to full-fibre broadband [The Register]

200,000 homes and biz have gigabit-capable connections

Step forward, Hull: first city in Blighty to claim the title of full-fibre connectivity.…

07:15

Saturday Morning Breakfast Cereal - Probe [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The really awkward situation is when you catch your partner sending an exploratory probe to see other people.


Today's News:

No, really, SMBC is just space jokes 24/7 now.

07:01

Windows 10 vs. Ubuntu 19.10 vs. Clear Linux vs. Debian 10.1 Benchmarks On An Intel Core i9 [Phoronix]

Earlier this week I provided some fresh Windows vs. Linux web browser benchmarks for both Firefox and Chrome. For those curious how the current Windows 10 vs. Linux performance compares for other workloads, here is a fresh look across a variety of software applications while testing the near-final Ubuntu 19.10, Intel's rolling-release Clear Linux, and Debian 10.1 on an Intel Core i9 HEDT platform.

06:45

ESA bigwigs: Euro Moon efforts are going the way they 'should' – which is to say not by 2024 [The Register]

Top brass on keeping ISS lights on and life after Brexit

ESTEC  The European Space Research and Technology Centre (ESTEC) in Noordwijk, Netherlands, opened its doors to the public last weekend, and The Register braved the rain to grill top brass on spaceships, partnerships and the "B" word.…

06:01

Ditch Chef, Puppet, Splunk and snyk for GitLab? That's the pitch from your new wannabe one-stop DevOps shop [The Register]

'Hyper-aggressive' company offers workflow portability for multiple clouds

"We want GitLab monitoring to be a complete replacement for DataDog," GitLab's director of product, Eric Brinkman, said yesterday. And he didn't stop there, referring to a whole swathe of "tools that GitLab can replace" at the firm's Commit event in London.…

05:16

Is right! Ofcom says Scousers enjoy a natter on the phone compared to southern blerts [The Register]

Especially to Boris Johnson

Liverpool is the most gobby verbal region in the UK, according to Ofcom – something prime minister Boris Johnson would no doubt have confirmed had he visited the city today.*…

04:48

Valve's Radeon "ACO" Vulkan Compiler Back-End Now Supports Navi [Phoronix]

The promising ACO compiler back-end for the Radeon "RADV" Vulkan driver now has support for GFX10/Navi graphics!..

04:30

2001 fiction set to be science fact? NASA boffin mulls artificial intelligence to watch over the lunar Gateway [The Register]

Daisy, Daisy, give me your answer do

If humans are to go beyond the Moon, they must rely less on ground control and more on AI systems to perform operations such as flying, and conduct scientific experiments more autonomously, according to a NASA paper [PDF] out this week.…

04:21

AMD Sends Out HDCP Support, New GPU Support In AMDKFD For Linux 5.5 [Phoronix]

In addition to Intel this week sending out their first big batch of graphics driver changes for the Linux 5.5 kernel cycle kicking off at year's end, today AMD developers sent in their first batch of AMDGPU/AMDKFD kernel driver changes targeting this next version of the Linux kernel...

03:46

Puppet to start pulling a few strings in the cloud-native world with Project Nebula [The Register]

Public beta for new shiny, plus many Tasks make a Plan in upcoming Enterprise 2019.2

Puppetize PDX  DevOps darling Puppet took to the stage at the company's Portland Puppetize PDX shindig yesterday to whip the covers off Project Nebula, before giving us a sneak peek at an updated Puppet Forge and a preview of Puppet Enterprise 2019.2.…

03:01

Former BAE Systems contractor charged with 'damaging disclosure' of UK defence secrets [The Register]

49-year-old to appear at the Old Bailey next month

A former BAE Systems defence contractor has appeared in court accused of leaking "highly sensitive" secrets to foreign governments.…

02:00

Watch online today: How to leverage data to disrupt rivals – and overcome challenges [The Register]

Join us with Google Cloud for advice to the brave

Webcast  If your strategy depends on using data to disrupt the market, then unstoppable data growth, a change of business strategy, or a fast-moving competitive landscape, are likely to present challenges.…

02:00

X.Org Server To See New CI-Driven Automated Release Cycles, Big Version Numbers [Phoronix]

There hasn't been a major release of the X.Org Server now in 17 months... Not because there haven't been any changes (in fact, a lot of GLAMOR and XWayland work among other fixes) but because no one has stepped up as release manager to get the next version out the door. To work around that, developers are looking at moving the X.Org Server to purely time-based releases and letting their continuous integration testing be the deciding factor on whether a release is ready to ship...

01:03

Europe publishes 5G risk assessment; America scrawls ‘Huawei’ on the side of a nuke and goes for a ride [The Register]

There’s nothing like reasoned policy debate. This is nothing like reasoned policy debate

The European Union has published a risk assessment of next-generation 5G mobile networks and concluded that everyone needs to think differently about security, given fundamental changes in how the new networks will operate.…

00:01

American intelligence follows British lead in warning of serious VPN vulnerabilities [The Register]

Now if only they'd accept the Queen back again...

The US National Security Agency (NSA) is warning admins to patch a set of months-old security bugs that have recently come under active attack.…

Wednesday, 09 October

23:01

iTerm2 issues emergency update after MOSS finds a fatal flaw in its terminal code [The Register]

It's time to update or call 0118 999 88199 9119 7253

The author of popular macOS open source terminal emulator iTerm2 has rushed out a new version (v3.3.6) because prior iterations have a security flaw that could allow an attacker to execute commands on a computer using the application.…

17:47

US charges Singapore coin miner with conning cloud firms out of compute time [The Register]

Man alleged to have faked identity as game developer

A man from Singapore has been indicted in the US for impersonating a game developer in order to steal time on cloud compute systems and mine cryptocurrency.…

14:52

China and Russia join to battle 'illegal internet content,' which means what you fear it does [The Register]

Authoritarian regimes continue wrestling internet back into box

China and Russia will sign a joint treaty aimed at tackling “illegal internet content” later this month, the Russian telecoms regulator has announced.…

14:23

That lithium-ion battery in your phone or car? It has just won three chemists the Nobel Prize [The Register]

Goodenough for Goodenough as boffin is still working at 97

The Nobel Prize in Chemistry has been awarded to three pioneers in the field of lithium ion batteries, which form the power storage unit of most modern technology.…

13:26

Father of Unix Ken Thompson checkmated: Old eight-char password is finally cracked [The Register]

Aussie bod's AMD GPU smashes hash in just four days

Back in 2014, developer Leah Neukirchen found an /etc/passwd file among a file dump from the BSD 3 source tree that included the passwords used by various computer science pioneers, including Dennis Ritchie, Ken Thompson, Brian Kernighan, Steve Bourne, and Bill Joy.…

11:00

Mission Extension Vehicle-1 launches to save space from zombie satellites [The Register]

Whew, you're a bit of a rust bucket, aren't you?! Come with us

International Launch Services (ILS) sent up a Proton rocket from the Baikonur Cosmodrome this morning with a payload containing the first commercial spacecraft designed to service and extend the life of satellites in orbit.…

09:51

Forget Brexit, ignore Trump, write off today: BT's gonna make us all 'realise the potential of tomorrow' [The Register]

Non-Indian call centres and High Street shops on the way

These truly are strange times. BT is plotting a return to the High Street, unleashing hundreds of tech troubleshooters onto the unsuspecting public - and onshoring all of its call centres to Britain quicker than scheduled.…

09:00

Nutanix lures cloudy bingers with Danish trilogy: HPE GreenLake deal, ServiceNow tie-up and ProLiant DX pact [The Register]

Yep, storage firm's software pre-installed on HPE servers

Hyperconverged playa Nutanix opened its .NEXT conference in Copenhagen with a triple announcement: an HPE GreenLake deal, its software pre-installed on HPE servers, and integration with ServiceNow for automated incident-handling.…

09:00

Terraforming Cloudflare: in quest of the optimal setup [The Cloudflare Blog]


This is a guest post by Dimitris Koutsourelis and Alexis Dimitriadis, working for the Security Team at Workable, a company that makes software to help companies find and hire great people.


Overview

This post is about our introductory journey into the infrastructure-as-code practice: managing Cloudflare configuration in a declarative and version-controlled way. We'd like to share the experience we've gained during this process: our pain points, the limitations we faced, and the different approaches we took, along with parts of our solution and experimentation.

Terraform world

Terraform is a great tool that fulfills our requirements, and fortunately, Cloudflare maintains its own provider that allows us to manage its service configuration hassle-free.

On top of that, Terragrunt is a thin wrapper that provides extra commands and functionality for keeping Terraform configurations DRY, and managing remote state.

The combination of both leads to a more modular and re-usable structure for Cloudflare resources (configuration), by utilizing terraform and terragrunt modules.

We've chosen to use the latest version of both tools (Terraform v0.12 & Terragrunt v0.19 respectively) and constantly upgrade to take advantage of the valuable new features and functionality which, at this point in time, remove important limitations.

Workable context

Our setup includes multiple domains that are grouped in two distinct Cloudflare organisations: production & staging. Our environments have their own purposes and technical requirements (i.e.: QA, development, sandbox and production), which translates to slightly different sets of Cloudflare zone configuration.

Our approach

Our main goal was to have a modular setup with the ability to manage any configuration for any zone, while keeping code repetition to a minimum. This is more complex than it sounds; we repeatedly changed our Terraform folder structure - and other technical aspects - during the development period. The following sections illustrate the alternatives we tried along the way, with their pros & cons.

Structure

Terraform configuration is based on the project's directory structure, so this is the place to start.

Instead of retaining the Cloudflare organisation structure (production & staging as root-level directories containing the zones that belong to each organisation), our decision was to group zones that share common configuration under the same directory. This helps keep the code DRY and the setup consistent and readable.

On the down side, this structure adds an extra layer of complexity, as two different sets of credentials need to be handled conditionally and two state files (at the environments/ root level) must be managed and isolated using workspaces.
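As a rough sketch of what that conditional handling can look like (the variable names here are illustrative, not our actual ones), the provider block can select credentials based on the active Terraform workspace:

provider "cloudflare" {
  # pick the organisation credentials that match the current workspace
  email   = terraform.workspace == "production" ? var.cf_email_production : var.cf_email_staging
  api_key = terraform.workspace == "production" ? var.cf_api_key_production : var.cf_api_key_staging
}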

On top of that, we used Terraform modules to keep sets of configuration common across zone groups in a single place.
Terraform modules repository

modules/
│    ├── firewall/
│        ├── main.tf
│        ├── variables.tf
│    ├── zone_settings/
│        ├── main.tf
│        ├── variables.tf
│    └── [...]  
└──

Terragrunt modules repository

environments/
│    ├── [...]
│    ├── dev/
│    ├── qa/
│    ├── demo/
│        ├── zone-8/ (production)
│            └── terragrunt.hcl
│        ├── zone-9/ (staging)
│            └── terragrunt.hcl
│        ├── config.tfvars
│        ├── main.tf
│        └── variables.tf
│    ├── config.tfvars
│    ├── secrets.tfvars
│    ├── main.tf
│    ├── variables.tf
│    └── terragrunt.hcl
└──

The Terragrunt modules tree gives flexibility, since we are able to apply configuration on a zone, zone group, or organisation level (which is in line with Cloudflare's configuration capabilities - i.e.: custom error pages can also be configured at the organisation level).
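As an illustration (simplified, with a made-up zone name), a zone-level terragrunt.hcl only needs to point at the shared Terraform code and feed it the zone-specific inputs; everything else is inherited from the parent configuration:

environments/demo/zone-8/terragrunt.hcl

include {
  path = find_in_parent_folders()
}

terraform {
  source = "../.."
}

inputs = {
  zone_name = "zone-8.example.com"
}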

Resource types

We decided to implement Terraform resources in different ways, to cover our requirements more efficiently.

1. Static resource

The first thought that came to mind was having one, or multiple .tf files implementing all the resources with hardcoded values assigned to each attribute. It's simple and straightforward, but can have a high maintenance cost if it leads to code copy/paste between environments.

So, common settings seem to be a good use case; we chose to implement access_rules Terraform resources accordingly:
modules/access_rules/main.tf

resource "cloudflare_access_rule" "no_17" {
  notes   = "this is a description"
  mode    = "blacklist"
  configuration = {
    target  = "ip"
    value   = "x.x.x.x"
  }
}
[...]
2. Parametrized resources

Our next step was to add variables to gain flexibility. This is useful when a few attributes of a shared resource configuration differ between multiple zones. Most of the configuration remains the same (as described above) and the variable declarations are added in the Terraform module, while their values are fed through the Terragrunt module as input variables, or as entries inside .tfvars files. The zone_settings_override resource was implemented accordingly:

modules/zone_settings/main.tf

resource "cloudflare_zone_settings_override" "zone_settings" {
  zone_id = var.zone_id
  settings {
    always_online       = "on"
    always_use_https    = "on"
    [...]
    browser_check       = var.browser_check
    mobile_redirect {
      mobile_subdomain  = var.mobile_redirect_subdomain
      status            = var.mobile_redirect_status
      strip_uri         = var.mobile_redirect_uri
    }
    
    [...]
    waf                 = "on"
    webp                = "off"
    websockets          = "on"
  }
}

environments/qa/main.tf

module "zone_settings" {
  source        = "git@github.com:foo/modules/zone_settings"
  zone_name     = var.zone_name
  browser_check = var.zone_settings_browser_check
  [...]
}

environments/qa/config.tfvars

#zone settings
zone_settings_browser_check = "off"
[...]
3. Dynamic resource

At that point, we thought that a more interesting approach would be to create generic resource templates to manage all instances of a given resource in one place. A template is implemented as a Terraform module and creates each resource dynamically, based on its input: data fed through the Terragrunt modules (/environments in our case), or entries in the tfvars files.

We chose to implement the account_member resource this way.
modules/account_members/variables.tf

variable "users" {
  description   = "map of users - roles"
  type          = map(list(string))
}
variable "member_roles" {
  description   = "account role ids"
  type          = map(string)
}

modules/account_members/main.tf

resource "cloudflare_account_member" "account_member" {
 for_each          = var.users
 email_address     = each.key
 role_ids          = [for role in each.value : lookup(var.member_roles, role)]
 lifecycle {
   prevent_destroy = true
 }
}

We feed the template with a map of users, where each member is assigned a list of roles. To make the code more readable, we mapped users to role names instead of role ids:
environments/config.tfvars

member_roles = {
  admin       = "000013091sds0193jdskd01d1dsdjhsd1"
  admin_ro    = "0000ds81hd131bdsjd813hh173hds8adh"
  analytics   = "0000hdsa8137djahd81y37318hshdsjhd"
  [...]
  super_admin = "00001534sd1a2123781j5gj18gj511321"
}
users = {
  "user1@workable.com"  = ["super_admin"]
  "user2@workable.com"  = ["analytics", "audit_logs", "cache_purge", "cf_workers"]
  "user3@workable.com"  = ["cf_stream"]
  [...]
  "robot1@workable.com" = ["cf_stream"]
}

Another interesting case we dealt with was the rate_limit resource; the variable declaration (list of objects) & implementation goes as follows:
modules/rate_limit/variables.tf

variable "rate_limits" {
  description   = "list of rate limits"
  default       = []
 
  type          = list(object(
  {
    disabled    = bool,
    threshold   = number,
    description = string,
    period      = number,
    
    match       = object({
      request   = object({
        url_pattern     = map(string),
        schemes         = list(string),
        methods         = list(string)
      }),
      response          = object({
        statuses        = list(number),
        origin_traffic  = bool
      })
    }),
    action      = object({
      mode      = string,
      timeout   = number
    })
  }))
}

modules/rate_limit/main.tf

locals {
 […]
}
data "cloudflare_zones" "zone" {
  filter {
    name    = var.zone_name
    status  = "active"
    paused  = false
  }
}
resource "cloudflare_rate_limit" "rate_limit" {
  count         = length(var.rate_limits)
  zone_id       =  lookup(data.cloudflare_zones.zone.zones[0], "id")
  disabled      = var.rate_limits[count.index].disabled
  threshold     = var.rate_limits[count.index].threshold
  description   = var.rate_limits[count.index].description
  period        = var.rate_limits[count.index].period
  
  match {
    request {
      url_pattern     = local.url_patterns[count.index]
      schemes         = var.rate_limits[count.index].match.request.schemes
      methods         = var.rate_limits[count.index].match.request.methods
    }
    response {
      statuses        = var.rate_limits[count.index].match.response.statuses
      origin_traffic  = var.rate_limits[count.index].match.response.origin_traffic
    }
  }
  action {
    mode        = var.rate_limits[count.index].action.mode
    timeout     = var.rate_limits[count.index].action.timeout
  }
}

environments/qa/rate_limit.tfvars

common_rate_limits = [
{
    #1
    disabled      = false
    threshold     = 50
    description   = "sample description"
    period        = 60
   
   match  = {
      request   = {
        url_pattern  = {
          "subdomain"   = "foo"
          "path"        = "/api/v1/bar"
        }
        schemes         = [ "_ALL_", ]
        methods         = [ "GET", "POST", ]
      }
      response  = {
        statuses        = []
        origin_traffic  = true
      }
    }
    action  = {
      mode      = "simulate"
      timeout   = 3600
    }
  },
  [...]
  }
]

The biggest advantage of this approach is that all common rate_limit rules live in one place and each environment can include its own rules in its .tfvars. Combining the two lists (common | unique rules) with Terraform's built-in concat() function achieves the join, so we wanted to give it a try:

locals {
  rate_limits  = concat(var.common_rate_limits, var.unique_rate_limits)
}

There is, however, a drawback: .tfvars files can only contain static values. Since every url attribute - which includes the zone name itself - has to be set explicitly in each environment's data, any change to a url has to be copied across all environments, with the zone name adjusted to match each one.

The solution we came up with, in order to make the zone name dynamic, was to split the url attribute into 3 parts: subdomain, domain and path. This works well for the .tfvars, but the added complexity of handling the new variables is non-negligible. The corresponding code illustrates the issue:
modules/rate_limit/main.tf

locals {
  rate_limits   = concat(var.common_rate_limits, var.unique_rate_limits)
  url_patterns  = [for rate_limit in local.rate_limits : "${lookup(rate_limit.match.request.url_pattern, "subdomain", null) != null ? "${lookup(rate_limit.match.request.url_pattern, "subdomain")}." : ""}${lookup(rate_limit.match.request.url_pattern, "domain", null) != null ? lookup(rate_limit.match.request.url_pattern, "domain") : var.zone_name}${lookup(rate_limit.match.request.url_pattern, "path", null) != null ? lookup(rate_limit.match.request.url_pattern, "path") : ""}"]
}

Readability vs functionality: although flexibility is increased and code duplication is reduced, the url transformations have an impact on the code's readability and ease of debugging (it took us several minutes to spot a typo). You can imagine this gets even worse if you attempt to implement a more complex resource (such as page_rule, which is a list of maps with four url attributes).

The underlying issue here is that, at the point we were implementing our resources, we had to choose maps over objects because maps allow attributes to be omitted and read back with the lookup() function (by setting default values). This is a requirement for certain resources such as page_rules: only certain attributes need to be defined (and others ignored).
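A minimal illustration of the difference (the variable name below is hypothetical): with a map, an attribute that may or may not be present can be read with lookup() and fall back to a default, whereas an object type would force every attribute to be declared:

locals {
  # returns null when the "cache_level" key is absent from the map
  cache_level = lookup(var.page_rule_actions, "cache_level", null)
}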

In the end, the context will determine if more complex resources can be implemented with dynamic resources.

4. Sequential resources

Cloudflare page rule resource has a specific peculiarity that differentiates it from other types of resources: the priority attribute.
When a page rule is applied, it gets a unique id and a priority number which corresponds to the order in which it was submitted. Although the Cloudflare API and the Terraform provider give the ability to explicitly specify the priority, there is a catch.

Terraform doesn't respect the order of resources inside a .tf file (not even in a for_each loop!); each resource is picked up in arbitrary order and then applied to the provider. So, if page_rule priority is important - as in our case - the submission order counts. The solution is to lock the sequence in which the resources are created through the depends_on meta-argument:

resource "cloudflare_page_rule" "no_3" {
  depends_on  = [cloudflare_page_rule.no_2]
  zone_id     = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target      = "www.${var.zone_name}/foo"
  status      = "active"
  priority    = 3
  actions {
    forwarding_url {
      status_code    = 301
      url            = "https://www.${var.zone_name}"
    }
  }
}
resource "cloudflare_page_rule" "no_2" {
  depends_on  = [cloudflare_page_rule.no_1]
  zone_id     = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target      = "www.${var.zone_name}/lala*"
  status      = "active"
  priority    = 2
  actions {
    ssl                     = "flexible"
    cache_level             = "simplified"
    resolve_override        = "bar.${var.zone_name}"
    host_header_override    = "new.domain.com"
  }
}
resource "cloudflare_page_rule" "page_rule_1" {
  zone_id   = lookup(data.cloudflare_zones.zone.zones[0], "id")
  target    = "*.${var.zone_name}/foo/*"
  status    = "active"
  priority  = 1
  actions {
    forwarding_url {
      status_code     = 301
      url             = "https://foo.${var.zone_name}/$1/$2"
    }
  }
}

So we had to go with a more static resource configuration, because the depends_on attribute only takes static values (not values calculated dynamically at runtime).

Conclusion

After changing our minds several times along the way on the Terraform structure and other technical details, we believe that there isn't a single best solution. It all comes down to the requirements and keeping a balance between complexity and simplicity. In our case, a mixed approach is a good middle ground.

Terraform is evolving quickly, but at this point it lacks some common coding capabilities, so over-engineering can be a trap (one we fell into too many times). Keep it simple and as DRY as possible. :)

08:12

Second MoD Airbus Zephyr spy drone crashes on Aussie test flight [The Register]

Delicate thing doesn't like turbulence, apparently

A second Airbus Zephyr high altitude pseudo-satellite (HAPS) drone, built for the UK's Ministry of Defence, has crashed in Australia while on a test flight.…

07:30

Scrambling for cloud relevance, Oracle hires... 2,000? Yes, that sounds like a nice round number [The Register]

Let's all pretend that we don't remember the layoffs in March

What Larry snatcheth away with one hand, he giveth with another. Oracle is hiring a couple of thousand infrastructure services sales bods and has promised to swing open its data centre doors to more cloud regions.…

06:47

You rang? Windows 10 gets ever cosier with Android, unleashes Calls on Insiders [The Register]

Plus: New build brings Cortana resizing to all. Hurrah!

Microsoft squeezed out a fresh build of Windows 10 last night, and finally released the much-anticipated Calls feature to eager Windows Insiders.…

06:00

Aria Technology takes £750k VAT fraud case to Court of Appeal [The Register]

Senior beak overrules Upper Tribunal judges' refusal

Aria Technology Ltd, the company that used to run e-tailer Aria PC, is headed for the Court of Appeal in a third attempt to overturn a £750,000 VAT fraud ruling.…

05:27

SUSE tosses OpenStack Cloud to double down on application delivery [The Register]

That means Kubernetes and DevOps for the Linux veteran

Linux veteran SUSE has decided to kill off development of its OpenStack Cloud product line and cease sales to focus on its investments in application delivery.…

05:23

Saturday Morning Breakfast Cereal - Git [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The Bible never refers to the humans trying to return, so I can only assume He sprayed some effective pesticides.


Today's News:

04:46

'We go back to the Moon to stay': Apollo vets not too chuffed with NASA's new rush to the regolith [The Register]

Walt Cunningham and Rusty Schweickart take a moment to chat with 'the enemy'

ESTEC  Apollo astronauts Walt Cunningham and Rusty Schweickart spoke at ESA's 2019 ESTEC shindig in the Dutch beach town of Noordwijk over the weekend, and The Register was fortunate enough to chat with the pair.…

04:00

Through the winds of winter, Microsoft sees a dream of spring... Azure Spring Cloud, that is [The Register]

Buddy Pivotal will operate managed framework on Azure Kubernetes Service

Microsoft and Pivotal have used the latter's SpringOne shindig in Austin, Texas, to show off Azure Spring Cloud, which is now available in private preview.…

03:01

TalkTalk bollocked after fibre marketing emails found to be full of sh!t [The Register]

100% capacity doesn't mean 2/3 of the true figure, growls ad watchdog

TalkTalk has been slapped down by the Advertising Standards Authority (ASA) after emailing lies to its customers as part of a hard-sell tactic for expensive fibre broadband.…

02:06

HP to hike upfront price of printer hardware as ink biz growth runs dry [The Register]

Incoming unit prez revokes licence to print money

HP is overturning a print sales model that helped it amass billions in profits over the decades but is now challenged by rival supplies makers luring customers with cheaper ink and toner cartridges.…

02:00

Command line quick tips: Locate and process files with find and xargs [Fedora Magazine]

find is one of the more powerful and flexible command-line programs in the daily toolbox. It does what the name suggests: it finds files and directories that match the conditions you specify. And with arguments like -exec or -delete, you can have find take action on what it… finds.

In this installment of the Command Line Quick Tips series, you’ll get an introduction to the find command and learn how to use it to process files with built-in commands or the xargs command.

Finding files

At a minimum, find takes a path to find things in. For example, this command will find (and print) every file on the system:

find /

And since everything is a file, you will get a lot of output to sort through. This probably doesn’t help you locate what you’re looking for. You can change the path argument to narrow things down a bit, but it’s still not really any more helpful than using the ls command. So you need to think about what you’re trying to locate.

Perhaps you want to find all the JPEG files in your home directory. The -name argument allows you to restrict your results to files that match the given pattern.

find ~ -name '*jpg'

But wait! What if some of them have an uppercase extension? -iname is like -name, but it is case-insensitive:

find ~ -iname '*jpg'

Great! But the 8.3 name scheme is so 1985. Some of the pictures might have a .jpeg extension. Fortunately, we can combine patterns with an “or,” represented by -o. The parentheses are escaped so that the shell doesn’t try to interpret them instead of the find command.

find ~ \( -iname '*jpeg' -o -iname '*jpg' \)

We’re getting closer. But what if you have some directories that end in jpg? (Why you named a directory bucketofjpg instead of pictures is beyond me.) We can modify our command with the -type argument to look only for files:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f

Or maybe you’d like to find those oddly named directories so you can rename them later:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d

It turns out you’ve been taking a lot of pictures lately, so narrow this down to files that have changed in the last week with -mtime (modification time). The -7 means all files modified in 7 days or fewer.

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7

Taking action with xargs

The xargs command takes arguments from the standard input stream and executes a command based on them. Sticking with the example in the previous section, let’s say you want to copy all of the JPEG files in your home directory that have been modified in the last week to a thumb drive that you’ll attach to a digital photo display. Assume you already have the thumb drive mounted as /media/photo_display.

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -print0 | xargs -0 cp -t /media/photo_display

The find command is slightly modified from the previous version. The -print0 option makes a subtle change to how the output is written: instead of using a newline, it adds a null character. The -0 (zero) option to xargs adjusts the parsing to expect this. This is important because otherwise actions on file names that contain spaces, quotes, or other special characters may not work as expected. You should use these options whenever you’re taking action on files.

The -t argument to cp is important because cp normally expects the destination to come last. You can do this without xargs using find’s -exec command, but the xargs method will be faster, especially with a large number of files, because it will run as a single invocation of cp.
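For reference, the equivalent -exec version looks roughly like this. Note that it invokes cp once for each file found, which is why the xargs version above scales better on large result sets:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -exec cp -t /media/photo_display {} \;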

Find out more

This post only scratches the surface of what find can do. find supports testing based on permissions, ownership, access time, and much more. It can even compare the files in the search path to other files. Combining tests with Boolean logic can give you incredible flexibility to find exactly the files you’re looking for. With built-in commands or piping to xargs, you can quickly process a large set of files.

Portions of this article were previously published on Opensource.com. Photo by Warren Wong on Unsplash.

Tuesday, 08 October

08:13

Saturday Morning Breakfast Cereal - Multiplanetary [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
'When I looked down from orbit, I saw that Earth was so fragile, and I knew then that we could totally kick its ass.'


Today's News:

SMBC is slowly becoming 100% space jokes.

03:00

Talk Transcript: How Cloudflare Thinks About Security [The Cloudflare Blog]


This is the text I used for a talk at artificial intelligence powered translation platform, Unbabel, in Lisbon on September 25, 2019.

Bom dia. Eu sou John Graham-Cumming o CTO do Cloudflare. E agora eu vou falar em inglês.

Thanks for inviting me to talk about Cloudflare and how we think about security. I’m about to move to Portugal permanently so I hope I’ll be able to do this talk in Portuguese in a few months.

I know that most of you don’t have English as a first language so I’m going to speak a little more deliberately than usual. And I’ll make the text of this talk available for you to read.

But there are no slides today.

I’m going to talk about how Cloudflare thinks about internal security, how we protect ourselves and how we secure our day to day work. This isn’t a talk about Cloudflare’s products.

Culture

Let’s begin with culture.

Many companies have culture statements. I think almost 100% of these are pure nonsense. Culture is how you act every day, not words written on the wall.

One significant piece of company culture is the internal Security Incident mailing list which anyone in the company can send a message to. And they do! So far this month there have been 55 separate emails to that list reporting a security problem.

These mails come from all over the company, from every department. Two to three per day. And each mail is investigated by the internal security team. Each mail is assigned a Security Incident issue in our internal Atlassian Jira instance.

People send: reports that their laptop or phone has been stolen (their credentials get immediately invalidated), suspicions about a weird email that they’ve received (it might be phishing or malware in an attachment), a concern about physical security (for example, someone wanders into the office and starts asking odd questions), that they clicked on a bad link, that they lost their access card, and, occasionally, a security concern about our product.

Things like stolen or lost laptops and phones happen way more often than you’d imagine. We seem to lose about two per month. For that reason and many others we use full disk encryption on devices, complex passwords and two factor auth on every service employees need to access. And we discourage anyone storing anything on their laptop and ask them to primarily use cloud apps for work. Plus we centrally manage machines and can remote wipe.

We have a 100% blame free culture. You clicked on a weird link? We’ll help you. Lost your phone? We’ll help you. Think you might have been phished? We’ll help you.

This has led to a culture of reporting problems, however minor, when they occur. It’s our first line of internal defense.

Just this month I clicked on a link that sent my web browser crazy hopping through redirects until I ended up at a bad place. I reported that to the mailing list.

I’ve never worked anywhere with such a strong culture of reporting security problems big and small.

Hackers

We also use HackerOne to let people report security problems from the outside. This month we’ve received 14 reports of security problems. To be honest, most of what we receive through HackerOne is very low priority. People run automated scanning tools and report the smallest of configuration problems, or, quite often, things that they don’t understand but that look like security problems to them. But we triage and handle them all.

And people do on occasion report things that we need to fix.

We also have a private paid bug bounty program where we work with a group of individual hackers (around 150 right now) who get paid for the vulnerabilities that they’ve found.

We’ve found that this combination of a public responsible disclosure program and then a private paid program is working well. We invite the best hackers who come in through the public program to work with us closely in the private program.

Identity

So, that’s all about people, internal and external, reporting problems, vulnerabilities, or attacks. A very short step from that is knowing who the people are.

And that’s where identity and authentication become critical. In fact, as an industry trend, identity management and authentication are among the biggest areas of spending by CSOs and CISOs. And Cloudflare is no different.

OK, well it is different: instead of spending a lot on identity and authentication, we’ve built our own solutions.

We did not always have good identity practices. In fact, for many years our systems had different logins and passwords and it was a complete mess. When a new employee started, accounts had to be made on Google for email and calendar, on Atlassian for Jira and the wiki, on the VPN, on the WiFi network and then on a myriad of other systems for the blog, HR, SSH, build systems, etc. etc.

And when someone left all that had to be undone. And frequently this was done incorrectly. People would leave and accounts would still be left running for a period of time. This was a huge headache for us and is a huge headache for literally every company.

If I could tell companies one thing they can do to improve their security it would be: sort out identity and authentication. We did and it made things so much better.

This makes the process of bringing someone on board much smoother and the same when they leave. We can control who accesses what systems from a single control panel.

I have one login via a product we built called Cloudflare Access and I can get access to pretty much everything. I looked in my LastPass vault while writing this talk and there are a total of just five username and password combinations, and two of those needed deleting because we’ve migrated those systems to Access.

So, yes, we use password managers. And we lock down everything with high quality passwords and two factor authentication. Everyone at Cloudflare has a Yubikey and access to TOTP (such as Google Authenticator). There are three golden rules: all passwords should be created by the password manager, all authentication has to have a second factor and the second factor cannot be SMS.

We had great fun rolling out Yubikeys to the company because we did it during our annual retreat in a single company wide sitting. Each year Cloudflare gets the entire company together (now over 1,000 people) in a hotel for two to three days of working together, learning from outside experts and physical and cultural activities.

Last year the security team gave everyone a pair of physical security tokens (a Yubikey and a Titan Key from Google for Bluetooth) and in an epic session configured everyone’s accounts to use them.

Note: do not attempt to get 500 people to sync Bluetooth devices in the same room at the same time. Bluetooth cannot cope.

Another important thing we implemented is automatic timeout of access to a system. If you don’t use access to a system you lose it. That way we don’t have accounts that might have access to sensitive systems that could potentially be exploited.

Openness

To return to the subject of Culture for a moment an important Cloudflare trait is openness.

Some of you may know that back in 2017 Cloudflare had a horrible bug in our software that became called Cloudbleed. This bug leaked memory from inside our servers into people’s web browsing. Some of that web browsing was being done by search engine crawlers and ended up in the caches of search engines like Google.

We had to do two things: stop the actual bug (this was relatively easy and was done in under an hour) and then clean up the equivalent of an oil spill of data. That took longer (about a week to ten days) and was very complicated.

But from the very first night when we were informed of the problem we began documenting what had happened and what we were doing. I opened an EMACS buffer in the dead of night and started keeping a record.

That record turned into a giant disclosure blog post that contained the gory details of the error we made, its consequences and how we reacted once the error was known.

We followed up a few days later with a further long blog post assessing the impact and risk associated with the problem.

This approach to being totally open ended up being a huge success for us. It increased trust in our product and made people want to work with us more.

I was on my way to Berlin to give a talk to a large retailer about Cloudbleed when I suddenly realized that the company I was giving the talk at was NOT a customer. And I asked the salesperson I was with what I was doing.

I walked in to their 1,000 person engineering team all assembled to hear my talk. Afterwards the VP of Engineering thanked me saying that our transparency had made them want to work with us rather than their current vendor. My talk was really a sales pitch.

Similarly, at RSA last year I gave a talk about Cloudbleed and a very large company’s CSO came up and asked to use my talk internally to try to encourage their company to be so open.

When on July 2 this year we had an outage, which wasn’t security related, we once again blogged in incredible detail about what happened. And once again we heard from people about how our transparency mattered to them.

The lesson is that being open about mistakes increases trust. And if people trust you then they’ll tend to tell you when there are problems. I get a ton of reports of potential security problems via Twitter or email.

Change

After Cloudbleed we started changing how we write software. Cloudbleed was caused, in part, by the use of memory-unsafe languages. In that case it was C code that could run past the end of a buffer.

We didn’t want that to happen again and so we’ve prioritized languages where that simply cannot happen. Such as Go and Rust. We were very well known for using Go. If you’ve ever visited a Cloudflare website, or used an app (and you have because of our scale) that uses us for its API then you’ve first done a DNS query to one of our servers.

That DNS query will have been responded to by a Go program called RRDNS.

There’s also a lot of Rust being written at Cloudflare and some of our newer products are being created using it. For example, Firewall Rules which do arbitrary filtering of requests to our customers are handled by a Rust program that needs to be low latency, stable and secure.

Security is a company wide commitment

The other post-Cloudbleed change was that any crashes on our machines came under the spotlight from the very top. If a process crashes I personally get emailed about it. And if the team doesn’t take those crashes seriously they get me poking at them until they do.

We missed the fact that Cloudbleed was crashing our machines and we won’t let that happen again. We use Sentry to correlate information about crashes and the Sentry output is one of the first things I look at in the morning.

Which, I think, brings up an important point. I spoke earlier about our culture of “If you see something weird, say something” but it’s equally important that security comes from the top down.

Our CSO, Joe Sullivan, doesn’t report to me, he reports to the CEO. That sends a clear message about where security sits in the company. But, also, the security team itself isn’t sitting quietly in the corner securing everything.

They are setting standards, acting as trusted advisors, and helping deal with incidents. But their biggest role is to be a source of knowledge for the rest of the company. Everyone at Cloudflare plays a role in keeping us secure.

You might expect me to have access to all our systems, a passcard that gets me into any room, a login for any service. But the opposite is true: I don’t have access to most things. I don’t need it to get my job done and so I don’t have it.

This makes me a less attractive target for hackers, and we apply the same rule to everyone. If you don’t need access for your job you don’t get it. That’s made a lot easier by the identity and authentication systems and by our rule about timing out access if you don’t use a service. You probably didn’t need it in the first place.

The flip side of all of us owning security is that deliberately doing the wrong thing has severe consequences.

Making a mistake is just fine. The person who wrote the bad line of code that caused Cloudbleed didn’t get fired, the person who wrote the bad regex that brought our service to a halt on July 2 is still with us.‌‌

Detection and Response

Naturally, things do go wrong internally. Things that didn’t get reported. To deal with them we need to detect problems quickly. This is an area where the security team does have real expertise and data.

We do this by collecting data about how our endpoints (my laptop, a company phone, servers on the edge of our network) are behaving. And this is fed into a homebuilt data platform that allows the security team to alert on anomalies.‌‌

It also allows them to look at historical data in case of a problem that occurred in the past, or to understand when a problem started. ‌‌

Initially the team was going to use a commercial data platform or SIEM but they quickly realized that these platforms are incredibly expensive and they could build their own at a considerably lower price.‌‌

Also, Cloudflare handles a huge amount of data. When you’re looking at operating system level events on machines in 194 cities plus every employee you’re dealing with a huge stream. And the commercial data platforms love to charge by the size of that stream.‌‌

We are integrating internal DNS data, activity on individual machines, network netflow information, badge reader logs and operating system level events to get a complete picture of what’s happening on any machine we own.‌‌

When someone joins Cloudflare they travel to our head office in San Francisco for a week of training. Part of that training involves getting their laptop and setting it up and getting familiar with our internal systems and security.‌‌

During one of these orientation weeks a new employee managed to download malware while setting up their laptop. Our internal detection systems spotted this happening and the security team popped over to the orientation room and helped the employee get a fresh laptop.‌‌

The time between the malware being downloaded and detected was about 40 minutes.‌‌

If you don’t want to build something like this yourself, take a look at Google’s Chronicle product. It’s very cool. ‌‌

One really rich source of data about your organization is DNS. For example, you can often spot malware just by the DNS queries it makes from a machine. If you do one thing then make sure all your machines use a single DNS resolver and get its logs.‌‌‌‌

Edge Security

In some ways the most interesting part of Cloudflare is the least interesting from a security perspective. Not because there aren’t great technical challenges to securing machines in 194 cities, but because some of the apparently mundane things I’ve talked about have such a huge impact.

Identity, Authentication, Culture, Detection and Response.‌‌

But, of course, the edge needs securing. And it’s a combination of physical data center security and software. ‌‌

To give you one example let’s talk about SSL private keys. Those keys need to be distributed to our machines so that when an SSL connection is made to one of our servers we can respond. But SSL private keys are… private!‌‌

And we have a lot of them. So we have to distribute private key material securely. This is a hard problem. We encrypt the private keys while at rest and in transport with a separate key that is distributed to our edge machines securely. ‌‌

Access to that key is tightly controlled so that no one can start decrypting keys in our database. And if our database leaked then the keys couldn’t be decrypted since the key needed is stored separately.‌‌

And that key is itself GPG encrypted.‌‌

But wait… there’s more!‌‌

We don’t actually want to have decrypted keys stored in any process that is accessible from the Internet. So we use a technology called Keyless SSL where the keys are kept by a separate process and accessed only when needed to perform operations.

And Keyless SSL can run anywhere. For example, it doesn’t have to be on the same machine as the machine handling an SSL connection. It doesn’t even have to be in the same country. Some of our customers make use of that to specify where their keys are distributed to.

Use Cloudflare to secure Cloudflare

One key strategy of Cloudflare is to eat our own dogfood. If you’ve not heard that term before it’s quite common in the US. The idea is that if you’re making food for dogs you should be so confident in its quality that you’d eat it yourself.

Cloudflare does the same for security. We use our own products to secure ourselves. But more than that if we see that there’s a product we don’t currently have in our security toolkit then we’ll go and build it.

Since Cloudflare is a cybersecurity company we face the same challenges as our customers, but we can also build our way out of those challenges. In  this way, our internal security team is also a product team. They help to build or influence the direction of our own products.

The team is also a Cloudflare customer using our products to secure us and we get feedback internally on how well our products work. That makes us more secure and our products better.

Our customers’ data is more precious than ours

The data that passes through Cloudflare’s network is private and often very personal. Just think of your web browsing or app use. So we take great care of it.‌‌

We’re handling that data on behalf of our customers. They are trusting us to handle it with care and so we think of it as more precious than our own internal data.‌‌

Of course, we secure both because the security of one is related to the security of the other. But it’s worth thinking about the data you have that, in a way, belongs to your customer and is only in your care.‌‌‌‌

Finally

I hope this talk has been useful. I’ve tried to give you a sense of how Cloudflare thinks about security and operates. We don’t claim to be the ultimate geniuses of security and would love to hear your thoughts, ideas and experiences so we can improve.‌‌

Security is not static and requires constant attention and part of that attention is listening to what’s worked for others.‌‌

Thank you.

Monday, 07 October

18:00

Discovering Popular Dishes with Deep Learning [Yelp Engineering and Product Blog]

Yelp is home to nearly 200 million user-submitted reviews and even more photos. This data is rich with information about businesses and user opinions. Through the application of cutting-edge machine learning techniques, we’re able to extract and share insights from this data. In particular, the Popular Dishes feature leverages Yelp’s deep data to take the guesswork out of what to order. For more details on the product itself, check out our product launch blog post. The Popular Dishes feature highlights the most talked about and photographed dishes at a restaurant, gathering user opinions...

07:04

Saturday Morning Breakfast Cereal - Time [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I heard a sound, as if millions of nears whined out about what exactly the definition of Time Travel should be...


Today's News:

02:00

IceWM – A really cool desktop [Fedora Magazine]

IceWM is a very lightweight desktop. It’s been around for over 20 years, and its goals today are still the same as back then: speed, simplicity, and getting out of the user’s way.

I used to add IceWM to Scientific Linux, for a lightweight desktop. At the time, it was only a .5 Meg rpm. When running, it used only 5 Meg of memory. Over the years, IceWM has grown a little bit. The rpm package is now 1 Meg. When running, IceWM now uses 10 Meg of memory. Even though it literally doubled in size in the past 10 years, it is still extremely small.

What do you get in such a small package? Exactly what it says, a Window Manager. Not much else. You have a toolbar with a menu or icons to launch programs. You have speed. And finally you have themes and options. Besides the few goodies in the toolbar, that’s about it.

Installation

Because IceWM is so small, you just install the main package and the default theme. In Fedora 31, the default theme will be part of the main package.

Fedora 30 / IceWM 1.3.8

$ sudo dnf install icewm icewm-clearlooks

Fedora 31/ IceWM 1.6.2

$ sudo dnf install icewm

In Fedora 31, the IceWM package will allow you to save disk space. Many of the dependencies are soft options.

$ sudo dnf install icewm --setopt install_weak_deps=false

Options

The defaults for IceWM are set so that your average Windows user feels comfortable. This is a good thing, because options are set manually, through configuration files.

I hope I didn’t lose you there, because it’s not as bad as it sounds. There are only 8 configuration files, and most people only use a couple. The main three config files are keys (keybindings), preferences (overall preferences), and toolbar (what is shown on the toolbar). The default config files are found in /usr/share/icewm/.

To make a change, you copy the default config to your home icewm directory (~/.icewm), edit the file, and then restart IceWM. The first time you do this might be a little scary because “Restart Icewm” is found under the “Logout” menu entry. But when you restart IceWM, you just see a single flicker, and your changes are there. Any open programs are unaffected and stay as they were.
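For example, to customize a keybinding, the steps look roughly like this (the xterm binding is only an illustration):

$ mkdir -p ~/.icewm
$ cp /usr/share/icewm/keys ~/.icewm/
$ echo 'key "Alt+Ctrl+t" xterm' >> ~/.icewm/keys

Then pick “Restart Icewm” from the “Logout” menu and the new key combination is active.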

Themes

IceWM in the NanoBlue theme

If you install the icewm-themes package, you get quite a few themes. Unlike regular options, you do not need to restart IceWM to switch to a new theme. Usually I wouldn’t talk much about themes, but since there are so few other features, I figured I’d mention them.

Toolbar

The toolbar is the one place where a few extra features have been added to IceWM. You will see that you can switch between workspaces. Workspaces are sometimes called Virtual Desktops. Click on a workspace to move to it. Right-clicking on a window’s taskbar entry allows you to move it between workspaces. If you like workspaces, this has all the functionality you will like. If you don’t like workspaces, it’s an option and can be turned off.

The toolbar also has Network/Memory/CPU monitoring graphs. Hover your mouse over the graph to get details. Click on the graph to get a window with full monitoring. These little graphs used to be on every window manager. But as those desktops matured, they have all taken the graphs out. I’m very glad that IceWM has left this nice feature alone.

Summary

If you want something lightweight but functional, IceWM is the desktop for you. It is set up so that new Linux users can use it out of the box. It is flexible so that Unix users can tweak it to their liking. Most importantly, IceWM lets your programs run without getting in the way.

Sunday, 06 October

09:13

Saturday Morning Breakfast Cereal - Prime Mover [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I wonder if today's the day I meet the vehement white-socks-black-shoes crowd.


Today's News:

Saturday, 05 October

09:53

Saturday Morning Breakfast Cereal - Ew [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
It's all fun and games until you end up with a kid who's only got 50% of your genes.


Today's News:

Friday, 04 October

04:30

Saturday Morning Breakfast Cereal - Enemy [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
And, because fundamental physics isn't worked out, enemies are everywhere!


Today's News:

02:00

In Fedora 31, 32-bit i686 is 86ed [Fedora Magazine]

The release of Fedora 31 drops the 32-bit i686 kernel and, as a result, bootable images. While there may be users out there who still have hardware which will not work with the 64-bit x86_64 kernel, there are very few. However, this article gives you the whole story behind the change, and what 32-bit material you’ll still find in Fedora 31.

What is happening?

The i686 architecture essentially entered community support with the Fedora 27 release. Unfortunately, there are not enough members of the community willing to do the work to maintain the architecture. Don’t worry, though — Fedora is not dropping all 32-bit packages. Many i686 packages are still being built to ensure things like multilib, wine, and Steam will continue to work.

While the repositories are no longer being composed and mirrored out, there is a koji i686 repository that works with mock for building 32-bit packages and, in a pinch, for installing 32-bit versions that are not part of the x86_64 multilib repository. Of course, maintainers expect this will see limited use. Users who simply need to run a 32-bit application should be able to do so with multilib on a 64-bit system.

What to do if you’re running 32-bit

If you still run 32-bit i686 installations, you’ll continue to receive supported Fedora updates through the Fedora 30 lifecycle. This is until roughly May or June of 2020. At that point, you can either reinstall as 64-bit x86_64 if your hardware supports it, or replace your hardware with 64-bit capable hardware if possible.

There is a user in the community who has done a successful “upgrade” from 32-bit Fedora to 64-bit x86 Fedora. While this is not an intended or supported upgrade path, it should work. The Project hopes to have some documentation for users who have 64-bit capable hardware to explain the process before the Fedora 30 end of life.

If you have a 64-bit capable CPU running 32-bit Fedora due to low memory, try one of the alternate desktop spins. LXDE and others tend to do fairly well in memory constrained environments. If you’re running simple servers on old 32-bit hardware that was just lying around, consider one of the newer ARM boards. The power savings alone can more than pay for the new hardware in many instances. And if none of these are an option, CentOS 7 offers a 32-bit image with longer term support for the platform.

Security and you

While some users may be tempted to keep running an older Fedora release past end of life, this is highly discouraged. People constantly research software for security issues, and oftentimes they find issues that have been around for years.

Once Fedora maintainers know about such issues, they typically patch for them, and make updates available to supported releases — but not to end of life releases. And of course, once these vulnerabilities are public, there will be people trying to exploit them. If you run an older release past end of life, your security exposure increases over time as a result, putting your system at ever-growing risk.


Photo by Alexandre Debiève on Unsplash.

Thursday, 03 October

18:00

Hosting Our First Awesome Women in Engineering Summit in SF [Yelp Engineering and Product Blog]

Last month, we held our first Awesome Women in Engineering (AWE) Summit at our headquarters in San Francisco. AWE’s mission is to build a strong community for women and allies in our engineering and product departments by facilitating professional career-building activities, leadership, and mentorship opportunities. As a resource group, we provide support and organize activities targeted towards professional growth for women, helping them to maximize their potential at Yelp and beyond. The summit was an internal, half-day event for women and allies in engineering and product at Yelp. We had previously hosted a summit for our EU offices, but this...

06:22

Saturday Morning Breakfast Cereal - Anything [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Satan is always leaving us nice little surprises, but for some reason we don't like him.


Today's News:

Wednesday, 02 October

10:25

Serverlist Sept. Wrap-up: Static sites, serverless costs, and more [The Cloudflare Blog]


Check out our eighth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.

Sign up below to have The Serverlist sent directly to your mailbox.

09:10

Engineering Career Development at Etsy [Code as Craft]

In late May of 2018, Etsy internally released an Engineering Career Ladder. Today, we’re sharing that ladder publicly and detailing why we decided to build it, why the content is what it is, and how it’s been put into use since its release.

Take a look

Defining a Career Ladder

A career ladder is a tool to help outline an engineer’s path for growth within a company. It should provide guidance to engineers on how to best take on new responsibilities, and allow their managers to assess and monitor performance and behavior. A successful career ladder should align career progression with a company’s culture, business goals, and guiding principles and act as a resource to guide recruiting, training, and performance assessments.

Etsy has had several forms of a career ladder before this iteration. The prior career ladders applied to all Etsy employees, and had a set of expectations for every employee in the same level across all disciplines. Overall, these previous ladders worked well for Etsy as a smaller company, but as the engineering team continued to grow we found the ladder needed updating to meet practical expectations, as its content started to feel too broad and unactionable.

As a result, we developed this career ladder, specific to engineering, to allow us to be more explicit with those expectations and create a unified understanding of what it means to be an engineer at a certain level at Etsy. This ladder has been in place for over a year now, and in that time we’ve gone through performance reviews, promotion cycles, lots of hiring, and one-on-one career development conversations. We’re confident that we’ve made a meaningful improvement to engineering career development at Etsy and hope that releasing this career ladder publicly can help other companies support engineering career growth as well.

Designing the Etsy Engineering Career Ladder

We formed a working group, made up of engineers and engineering managers of various levels, focused on creating a new iteration of the career ladder. The working group included Miriam Lauter, Dan Auerbach, Jason Wain, and me. We started by exploring our current company-wide career ladder, discussing its merits and limitations, and the impact it had on engineering career development. We knew that any new version needed to be unique to Etsy, but we spent time exploring publicly available ladders of companies who had gone through a similar process in an effort to understand both tactical approaches and possible formats. Many thanks specifically to Spotify, Kickstarter, Riot Games, and Rent the Runway for providing insight into their processes and outcomes. Reviewing their materials was invaluable.

We decided our first step was to get on the same page as to what our goals were, and went through a few exercises resulting in a set of tenets that we felt would drive our drafting process and provide a meaningful way to evaluate the efficacy of the content. These tenets provided the foundation to our approach for developing the ladder.

The Tenets

Support meaningful career growth for engineers

Our career ladder should be clear enough, and flexible enough, to provide direction for any engineer at the company. We intended this document to provide actionable steps to advance your career in a way that is demonstrably impactful. Ideally, engineers would use this ladder to reflect on their time at Etsy and say “I’ve developed skills here I’ll use my entire career.”

Unify expectations across engineering

We needed to build alignment across the entire engineering department about what was required to meet the expectations of a specific level. If our career ladder were too open to interpretation it would cause confusion, particularly as it relates to the promotion process. We wanted to ensure that everyone had a succinct, memorable way to describe our levels, and understand exactly how promotions happen and what is expected of themselves and their peers.

Recognize a variety of valid career paths

Whether you’re building machine learning models or localizing our products, engineering requires skills across a range of competencies, and every team and project takes individuals with strengths in each. We wanted to be explicit about what we believe about the discipline, that valid and meaningful career paths exist at all levels for engineers who bring differences of perspectives and capabilities, and that not everyone progresses as an engineer in the same way. We intended to codify that we value growth across a range of competencies, and that we don’t expect every person to have the same set of strengths at specific points in their career.

Limit room for bias in how we recognize success

A career ladder is one in a set of tools that can help an organization mitigate potential bias. We needed to be thoughtful about our language, ensuring that it is inclusive, objective, and action oriented. We knew the career ladder would be used as a basis for key career advancement moments, such as hiring and promotions, so developing a clear and consistent ladder was critical for mitigating potential bias in these processes.

Developing the Etsy Engineering Career Ladder

With these tenets in place, we had the first step towards knowing what was necessary for success. In addition to creating draft ladder formats, we set about determining how we could quantify the improvements that we were making. We outlined key areas where we’d need to directly involve our stakeholders, including engineering leadership, HR, Employee Resource Groups, and of course engineers. We made sure to define multiple perspectives for which the ladder should be a utility; e.g. an engineer looking to get promoted, a manager looking to help guide an engineer to promotion, or a manager who needed to give constructive performance feedback.

Implicit biases can be notoriously difficult to acknowledge and remove from these processes, and we knew that in order to do this as best as possible we’d need to directly incorporate feedback from many individuals, both internal and external, across domains and disciplines, and with a range of perspectives, to assure that we were building those perspectives into the ladder.

Our tactics for measuring our progress included fielding surveys and requests for open feedback, as well as direct 1:1 in-depth feedback sessions and third party audits to ensure our language was growth-oriented and non-idiomatic. We got feedback on structure and organization of content, comprehension of the details within the ladder, the ladder’s utility when it came to guiding career discussions, and alignment with our tenets.

The feedback received was critical in shaping the ladder. It helped us remove duplicative, unnecessary, or confusing content and create a format that we thought best aligned with our stated tenets and conveyed our intent. 

And finally, the Etsy Engineering Career Ladder

You can find our final version of the Etsy Engineering Career Ladder here.

The Etsy Engineering Career Ladder is split into two parts: level progression and competency matrix. This structure explicitly allows us to convey how Etsy supports a variety of career paths while maintaining an engineering-wide definition of each level. The level progression is the foundation of the career ladder. For each level, the ladder lays out all requirements including expectations, track record, and competency guidelines. The competency matrix lays out the behaviors and skills that are essential to meeting the goals of one’s role, function, or organization.

Level Progression

Each section within the level progression provides a succinct definition of the requirements for an engineer with that title. It details a number of factors, including the types of problems an engineer is solving, the impact of their work on organizational goals and priorities and how they influence others that they work with. For levels beyond Engineer I, we outline an expected track record, detailing achievements over a period of time in both scale and complexity. And to set expectations for growth of competencies, we broadly outline what levels of mastery an engineer needs to achieve in order to be successful.

Competencies

If the level progression details what is required of an engineer at a certain level, competencies detail how we expect they can meet those expectations. We’ve outlined five core competency areas:

  • Delivery
  • Domain Expertise
  • Problem Solving
  • Communication
  • Leadership

For each of these five competency areas, the competency matrix provides a list of examples that illustrate what it means to have achieved various levels of mastery. Mastery of a competency is cumulative — someone who is “advanced” in problem solving is expected to retain the skills and characteristics required for an “intermediate” or “beginner” problem solver.

Evaluating our Success

We internally released this new ladder in May of 2018. We did not immediately make any changes to our performance review processes, as it was critical to not change how we were evaluating success in the middle of a cycle. We merely released it as a reference for engineers and their managers to utilize when discussing career development going forward. When our next performance cycle kicked off, we began incorporating details from the ladder into our documentation and communications, making sure that we were using it to set the standards for evaluation.

Today, this career ladder is one of the primary tools we use for guiding engineer career growth at Etsy. Utilizing data from company-wide surveys, we’ve seen meaningful improvement in how engineers see their career opportunities as well as growing capabilities for managers to guide that growth.

Reflecting on the tenets outlined at the beginning of the process allows us to look back at the past year and a half and recognize the change that has occurred for engineers at Etsy and evaluate the ladder against the goals we believed would make it a success. Let’s look back through each tenet and see how we accomplished it.

Support meaningful career growth for engineers

While the content is guided by our culture and Guiding Principles, generally none of the competencies are Etsy-specific. The expectations, track record, and path from “beginner” to “leading expert” in a competency category are designed to show the growth of an engineer’s impact and recognize accomplishments that they can carry throughout their career, agnostic of their role, team, or even company.

The competency matrix also allows us to guide engineer career development within a level. While a promotion to a new level is a key milestone that requires demonstration of meeting expectations over time, advancing your level of mastery by focusing on a few key competencies allows engineers to demonstrate continual growth, even within the same level. This encourages engineers and their managers to escape the often insurmountable task of developing a plan to achieve the broader set of requirements for the next promotion, and instead create goals that help them get there incrementally.

Compared to our previous ladder, the path to Staff Engineer is no longer gated by the necessity to increase one’s breadth. We recognized that every domain has significantly complex, unscoped problems that need to be solved, and that we were limiting engineer growth by requiring those who were highly successful in their domain to expand beyond it. Having expectations outlined as they are now allows engineers the opportunity to grow by diving more deeply into their current domains.

Unify expectations across engineering

The definition for each level consists only of a few expectations, a track record, and guidelines for level of mastery of competencies. It is easy to parse, and easy to refer back to for a quick understanding of the requirements. With a little reflection, it should be easy to describe how any engineer meets the three to five expectations of their level.

Prior to release, we got buy-in from every organizational leader in engineering that these definitions aligned with the reality of the expectations of engineers in their org. Since release we’ve aligned our promotion process to the content in the ladder. We require managers to outline how a candidate has met the expectations over the requisite period stated in the track record for their new level, and qualify examples of how they demonstrate the suggested level of mastery for competencies.

Recognize a variety of valid career paths

We ask managers to utilize the competencies document with their reports’ specific roles in mind when talking about career progression. Individual examples within the competency matrix may feel more or less applicable to individual roles, such as a Product Engineer or a Security Engineer, and this adaptability allows per-discipline growth while still aligning with the behaviors and outcomes we agree define a level of mastery. A small set of example skills is provided for each competency category that can help to better contextualize the application of the competencies in various domains. Additionally, we intentionally do not detail any competencies for which success is reliant on your team or organization.

Allowing managers to embrace the flexibility inherent in the competency matrix and its level of mastery system has allowed us to universally recognize engineer growth as it comes in various forms, building teams that embrace differences and value success in all its shapes. Managers can grow more diverse teams, for instance, by being able to recognize engineering leaders who are skilled domain experts, driving forward technical initiatives, and other engineering leaders who are skilled communicators, doing the glue work and keeping the team aligned on solving the right problems. We recognize that leadership takes many forms, and that is reflected in our competency matrix.

Limit room for bias in how we recognize success

The career ladder is only a piece of how we can mitigate potential bias as an organization. There are checks and balances built into other parts of Etsy’s human resources processes and career development programs, but since a career ladder plays such a key role in shaping the other processes, we approached this tenet very deliberately.

The competencies are not personality based, as we worked to remove anything that could be based on subjective perception of qualities or behaviors, such as “being friendly.” All content is non-idiomatic, in an effort to reduce differences in how individuals will absorb or comprehend the content. We also ensured that the language was consistent between levels by defining categories for each expectation. For instance, defining the expected complexity of the problems engineers solve per level allowed us to make sure we weren’t introducing any leaps in responsibility between levels that couldn’t be tied back to growth in the previous level. 

We also explicitly avoided any language that reads as quantifiable (e.g. “you’ve spoken at two or more conferences”) as opportunities to achieve a specific quantity of anything can be severely limited by your role, team, or personal situation, and can lead to career advice that doesn’t get at the real intent behind the competency. Additionally, evaluation of an individual against the ladder, for instance as part of a promotion, is not summarized in numbers. There is no score calculation or graphing an individual on a chart, nor is there an explicit number of years in role or projects completed as an expectation. While reducing subjectivity is key to mitigating potential bias, rigid numerical guidelines such as these can actually work against our other tenets by not allowing sufficient flexibility given an individual’s role.

Most importantly, the ladder was shaped directly through feedback from Etsy engineers, who have had direct personal experiences with how their individual situations may have helped or hindered their careers to draw on.

We’re really passionate about supporting ongoing engineer career growth at Etsy, and doing it in a way that truly supports our mission. We believe there’s a path to Principal Engineer for every intern and that this ladder goes a long way in making that path clear and actionable. We hope this ladder can serve as an example, in addition to those we took guidance from, to help guide the careers of engineers everywhere.

If you’re interested in growing your career with us, we’d love to talk; just click here to learn more.

08:24

Saturday Morning Breakfast Cereal - Talk [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
See, this is why trying to control the media is severely overrated.


Today's News:

Available in just a few weeks! If you know any media sources you'd like to hear interview us about this topic, please bug them on our behalf.



08:22

Saturday Morning Breakfast Cereal - Magnitude [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I'm informed that technically an order of magnitude down would be 1/10th of a deity. But, if you just physics the number one more time, you arrive at exactly 0.


Today's News:

It's a double update day, thanks to early buyers!

08:02

Fedora projects for Hacktoberfest [Fedora Magazine]

It’s October! That means it’s time for the annual Hacktoberfest presented by DigitalOcean and DEV. Hacktoberfest is a month-long event that encourages contributions to open source software projects. Participants who register and submit at least four pull requests to GitHub-hosted repositories during the month of October will receive a free t-shirt.

In a recent Fedora Magazine article, I listed some areas where would-be contributors could get started contributing to Fedora. In this article, I highlight some specific projects that provide an opportunity to help Fedora while you participate in Hacktoberfest.

Fedora infrastructure

  • Bodhi — When a package maintainer builds a new version of a software package to fix bugs or add new features, it doesn’t go out to users right away. First it spends time in the updates-testing repository where it can receive some real-world usage. Bodhi manages the flow of updates from the testing repository into the updates repository and provides a web interface for testers to provide feedback.
  • the-new-hotness — This project listens to release-monitoring.org (which is also on GitHub) and opens a Bugzilla issue when a new upstream release is published. This allows package maintainers to be quickly informed of new upstream releases.
  • koschei — koschei enables continuous integration for Fedora packages. It is a service that scratch-rebuilds RPM packages in a Koji instance when their build dependencies change or after some time elapses.
  • MirrorManager2 — Distributing Fedora packages to a global user base requires a lot of bandwidth. Just like developing Fedora, distributing Fedora is a collaborative effort. MirrorManager2 tracks the hundreds of public and private mirrors and routes each user to the “best” one.
  • fedora-messaging — Actions within the Fedora community—from source code commits to participating in IRC meetings to…lots of things—generate messages that can be used to perform automated tasks or send notifications. fedora-messaging is the tool set that makes sending and receiving these messages possible.
  • fedocal — When is that meeting? Which IRC channel was it in again? Fedocal is the calendar system used by teams in the Fedora community to coordinate meetings. Not only is it a good Hacktoberfest project, it’s also looking for a new maintainer to adopt it.

In addition to the projects above, the Fedora Infrastructure team has highlighted good Hacktoberfest issues across all of their GitHub projects.

Community projects

  • bodhi-rs — This project provides Rust bindings for Bodhi.
  • koji-rs — Koji is the system used to build Fedora packages. Koji-rs provides bindings for Rust applications.
  • fedora-rs — This project provides a Rust library for interacting with Fedora services, similar to the libraries that already exist for other languages like Python.
  • feedback-pipeline — One of the current Fedora Council objectives is minimization: work to reduce the installation and patching footprint of Fedora releases. feedback-pipeline is a tool developed by this team to generate reports of RPM sizes and dependencies.

And many more

The projects above are only a small sample focused on software used to build Fedora. Many Fedora packages have upstreams hosted on GitHub—too many to list here. The best place to start is with a project that’s important to you. Any contributions you make help improve the entire open source ecosystem. If you’re looking for something in particular, the Join Special Interest Group can help. Happy hacking!

Tuesday, 01 October

19:00

Learn more about Workers Sites at Austin & San Francisco Meetups [The Cloudflare Blog]


Last Friday, at the end of Cloudflare’s 9th birthday week, we announced Workers Sites.

Now, using the Wrangler CLI, you can deploy entire websites directly to the Cloudflare Network using Cloudflare Workers and Workers KV. If you can statically generate the assets for your site, think create-react-app, Jekyll, or even the WP2Static plugin, you can deploy it to our global network, which spans 194 cities in more than 90 countries.

If you’d like to learn more about how it was built, you can read more about this in the technical blog post. Additionally, I wanted to give you an opportunity to meet with some of the developers who contributed to this product and hear directly from them about their process, potential use cases, and what it took to build.

Check out these events. If you’re based in Austin or San Francisco (more cities coming soon!), join us on-site. If you’re based somewhere else, you can watch the recording of the events afterwards.

Growing Dev Platforms at Scale & Deploying Static Websites

Talk 1: Inspiring with Content: How to Grow Developer Platforms at Scale

Serverless platforms like Cloudflare Workers provide benefits like scalability, high performance, and lower costs. However, when talking to developers, one of the most common reactions is, "this sounds interesting, but what do I build with it?"

In this talk, we’ll cover how at Cloudflare we’ve been able to answer this question at scale with Workers Sites. We’ll go over why this product exists and how the implementation leads to some unintended discoveries.

Speaker Bio:
Victoria Bernard is a full-stack, product-minded engineer focused on Cloudflare Workers Developer Experience. An engineer who started a career working at large firms in hardware sales and moved throughout Cloudflare from support to product and to development. Passionate about building products that make developer lives easier and more productive.

Talk 2:  Extending a Serverless Platform: How to Fake a File System…and Get Away With It

When building a platform for developers, you can’t anticipate every use case. So, how do you build new functionality into a platform in a sustainable way, and inspire others to do the same?

Let’s talk about how we took a globally distributed serverless platform (Cloudflare Workers) and key-value store (Workers KV) intended to store short-lived data and turned them into a way to easily deploy static websites. It wasn’t a straightforward journey, but join us as we overcome roadblocks and learn a few lessons along the way.

Speaker Bio:
Ashley Lewis headed the development of the features that became Workers Sites. She's process and collaboration oriented and focused on user experience first at every level of the stack. Ashley proudly tops the leaderboard for most LOC deleted.

Agenda:

  • 6:00pm - Doors open
  • 6:30pm - Talk 1: Inspiring with Content: How to Grow Developer Platforms at Scale
  • 7:00pm - Talk 2:  Extending a Serverless Platform: How to Fake a File System…and Get Away With It
  • 7:30pm - Networking over food and drinks
  • 8:00pm - Event conclusion

Austin, Texas Meetup


Register Here »

San Francisco, California Meetup


Register Here »

While you’re at it, check out our monthly developer newsletter: The Serverlist


Have you built something interesting with Workers? Let us know @CloudflareDev!

09:17

Saturday Morning Breakfast Cereal - Accurate [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
This comic loosely based on Bob Berman's delightfully curmudgeonly column in the October issue of Astronomy magazine.


Today's News:

Monday, 30 September

11:00

Not so static... Introducing the HTMLRewriter API Beta to Cloudflare Workers [The Cloudflare Blog]


Today, we’re excited to announce HTMLRewriter beta — a streaming HTML parser with an easy-to-use, selector-based JavaScript API for DOM manipulation, available in the Cloudflare Workers runtime.

For those of you who are unfamiliar, Cloudflare Workers is a lightweight serverless platform that allows developers to leverage Cloudflare’s network to augment existing applications or create entirely new ones without configuring or maintaining infrastructure.

Static Sites to Dynamic Applications

On Friday we announced Workers Sites: a static site deployment workflow built into the Wrangler CLI tool. Now, paired with the HTML Rewriter API, you can perform DOM transformations on top of your static HTML, right on the Cloudflare edge.

You could previously do this by ingesting the entire body of the response into the Worker; however, that method was prone to a few issues. First, parsing a large file was bound to run into memory or CPU limits. Additionally, it would impact your TTFB as the body could no longer be streamed, and the browser would be prevented from doing any speculative parsing to load subsequent assets.

HTMLRewriter was the missing piece to having your application fully live on the edge – soup to nuts. You can build your API on Cloudflare Workers as a serverless function, have the static elements of your frontend hosted on Workers Sites, and dynamically tie them together using the HTMLRewriter API.

Enter JAMStack

You may be thinking “wait!”, JavaScript, serverless APIs… this is starting to sound a little familiar. It sounded familiar to us too.

Is this JAMStack?

First, let’s answer the question — what is JAMStack? JAMStack is a term coined by Mathias Biilmann, that stands for JavaScript, APIs, and Markup. JAMStack applications are intended to be very easy to scale since they rely on simplified static site deployment. They are also intended to simplify the web development workflow, especially for frontend developers, by bringing data manipulation and rendering that traditionally happened on the backend to the front-end and interacting with the backend only via API calls.

So to that extent, yes, this is JAMStack. However, HTMLRewriter takes this idea one step further.

The Edge: Not Quite Client, Not Quite Server

Most JAMStack applications rely on client-side calls to third-party APIs, where the rendering can be handled client-side using JavaScript, allowing front end developers to work with toolchains and languages they are already familiar with. However, this means that with every page load the client has to go to the origin, wait for HTML and JS, and then, once those are parsed and loaded, make multiple calls to APIs. Additionally, all of this happens on client-side devices, which are inevitably less powerful machines than servers and have potentially flaky last-mile connections.

With HTMLRewriter in Workers, you can make those API calls from the edge, where failures are significantly less likely than on client device connections, and results can often be cached. Better yet, you can write the APIs themselves in Workers and can incorporate the results directly into the HTML — all on the same powerful edge machine. Using these machines to perform “edge-side rendering” with HTMLRewriter always happens as close as possible to your end users, without happening on the device itself, and it eliminates the latency of traveling all the way to the origin.

What does the HTMLRewriter API look like?

The HTMLRewriter class offers a jQuery-like experience directly inside of your Workers application, allowing developers to build deeply functional applications, leaning on a powerful JavaScript API to parse and transform HTML.

Below is an example of how you can use the HTMLRewriter to rewrite links on a webpage from HTTP to HTTPS.

const REWRITER = new HTMLRewriter()
    .on('a.avatar', { element:  e => rewriteUrl(e, 'href') })
    .on('img', { element: e => rewriteUrl(e, 'src') });

async function handleRequest(req) {
  const res = await fetch(req);
  return REWRITER.transform(res);
}

In the example above, we create a new instance of HTMLRewriter, use selectors to find all a elements with the avatar class and all img elements, and call the rewriteUrl function on the href and src attributes respectively.
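The rewriteUrl helper itself isn’t shown above. Here is a minimal sketch of what it could look like (a hypothetical implementation, assuming only the getAttribute and setAttribute methods that HTMLRewriter element handlers provide):

// Hypothetical helper: rewrite an http:// URL in the given attribute to https://.
function rewriteUrl(element, attributeName) {
  const value = element.getAttribute(attributeName);
  // Leave relative URLs, and URLs that are already https://, untouched.
  if (value && value.startsWith('http://')) {
    element.setAttribute(attributeName, value.replace('http://', 'https://'));
  }
}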

Internationalization and localization tutorial: If you’d like to take things further, we have a full tutorial on how to make your application i18n friendly using HTMLRewriter.
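As a rough illustration of the “edge-side rendering” idea described earlier, the sketch below fetches the static page and a JSON API in parallel at the edge and injects the API result into the HTML as it streams through. The selector #latest-price and the API URL are hypothetical placeholders, not part of any real application:

async function handleRequest(request) {
  // Fetch the static HTML and the API data in parallel, both from the edge.
  const [page, apiResponse] = await Promise.all([
    fetch(request),
    fetch('https://api.example.com/latest-price'),
  ]);
  const { price } = await apiResponse.json();

  // Stream the HTML through HTMLRewriter, filling in the dynamic value.
  return new HTMLRewriter()
    .on('#latest-price', {
      element(el) {
        el.setInnerContent(`$${price}`);
      },
    })
    .transform(page);
}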


Getting started

If you’re already using Cloudflare Workers, you can simply get started with the HTMLRewriter by consulting our documentation (no sign up or anything else required!). If you’re new to Cloudflare Workers, we recommend starting out by signing up here.

If you’re interested in the nitty-gritty details of how the HTMLRewriter works, and learning more than you’ve ever wanted to know about parsing the DOM, stay tuned. We’re excited to share the details with you in a future post.

One last thing: you are not limited to Workers Sites only. Since Cloudflare Workers can be deployed as a proxy in front of any application, you can use the HTMLRewriter as an elegant way to augment your existing site and easily add dynamic elements, regardless of backend.

We love to hear from you!

We’re always iterating and working to improve our product based on customer feedback! Please help us out by filling out our survey about your experience.


Have you built something interesting with Workers? Let us know @CloudflareDev!

08:23

¡Bienvenidos a Latinflare! [The Cloudflare Blog]


Our Story

When I first began interviewing with Cloudflare in the Spring of 2019, I came across a Cloudflare blog post announcing Proudflare, the company’s LGBTQIA+ Employee Resource Group (ERG). The post gave me a clear sense of the company’s commitment to diversity and inclusion. I could tell this was a place that values and celebrates diversity, which really appealed to me as I progressed through the interview process with Cloudflare, and ultimately accepted the role.

Fast forward to my Cloudflare new hire orientation, two weeks of training and introductions at our San Francisco HQ. We learned about the various ERGs at Cloudflare including one for Latinx employees. While I had a strong desire to be part of a Latinx ERG, it was clear that the group was actually in need of someone to lead the effort and rally the troops. At Cloudflare, we have offices across the country and around the world. I wasn’t really sure how to launch an ERG that would be global in scope. After meeting with leads from other Cloudflare ERGs, understanding the landscape, and attending an external workshop, everything started to come together.

In early August, we officially gave ourselves the name Latinflare. In mid-September, we agreed on our amazing logo (which by the way, includes the primary colors of flags from across Latin America set over a lava lamp background). Most importantly, we have agreed, as a group, that our priorities are:

  • to offer a space where Latinx employees and their allies can gather and network,
  • to create a pipeline of future employees of diverse backgrounds, and
  • to be an integral part of the communities where we work.
A mural of Frida Kahlo captured on the streets of Buenos Aires. The mural took the collective of three artists – Julián Campos Segovia, Jean Paul Jesses and Juan Carlos Campos – three weeks to paint

What’s Next for Latinflare

We are gearing up for Hispanic Heritage Month. These efforts include launching Latinflare, holding our inaugural event on October 16th, and continuing to plan more events and activities moving forward. Great things are starting to happen!

How you can support

If you are not a Cloudflare employee but are interested in celebrating Hispanic Heritage, I urge you to find events and activities that are taking place near you. And while our inaugural Latinflare event will be an employee-only event, the group has high hopes to host quarterly meet-ups that will eventually give us the opportunity to network with ERGs and organizations outside of Cloudflare. In addition, you will hear from us again towards the end of the year, when we plan to share some “tradiciones navideñas” with the rest of the Cloudflare family.

Happy Hispanic Heritage Month to all! Latinflare stickers will be available in most offices starting this week. If you are not a Cloudflare employee, but are located near a Cloudflare office, please stop by the front desk at your location and ask for one. Stickers for everyone!  

NYC Office celebrates the launch of Latinflare!!
Latinflare London - PRESENTE!!
Latinflare Miami enjoying a Peruvian lunch :-)
Latinflare at our Headquarters in San Francisco
Proud Latinflarians representing Austin, TX!

07:25

Saturday Morning Breakfast Cereal - Trading [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I happen to be reading a book about the history of conspiracy theories today, so let me just say for the record that I don't believe a flaming ram's skull interns at high frequency trading firms.


Today's News:

02:00

Contribute at the kernel and IoT edition Fedora test days [Fedora Magazine]

Fedora test days are events where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are two test days in the upcoming week. The first, running from Monday 30 September through Monday 07 October, is for testing kernel 5.3. On Wednesday 02 October, the test day focuses on the Fedora 31 IoT Edition. Come and test with us to make the upcoming Fedora 31 even better.

Kernel test week

The kernel team is working on final integration for kernel 5.3. This version was just recently released, and will arrive soon in Fedora as the shipping kernel for Fedora 31. As a result, the Fedora kernel and QA teams have organized a test week for Monday, Sept 30 through Monday, October 07. Refer to the wiki page for links to the test images you’ll need to participate. The steps are clearly outlined in this document.

Fedora IoT Edition test day

Fedora Internet of Things is a variant of Fedora focused on IoT ecosystems. Whether you’re working on a home assistant, industrial gateways, or data storage and analytics, Fedora IoT provides a trusted open source platform to build on. Fedora IoT produces a monthly rolling release to help you keep your ecosystem up-to-date. The IoT and QA teams will hold this test day on Wednesday, October 02. Refer to the wiki page for links and resources to test the IoT Edition.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about both test days is available on the wiki pages above. If you’re available on or around the days of the events, please do some testing and report your results.

Sunday, 29 September

08:37

Saturday Morning Breakfast Cereal - School [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I have this idea for a talk radio show that just repeats, 24 hours a day, 'the rate of occurrence is almost always more important than the existence of an occurrence.'


Today's News:

Speaking of polemicists, the book on Open Borders policy is out in just one month!

Saturday, 28 September

16:54

Cloudflare’s protection against a new Remote Code Execution vulnerability (CVE-2019-16759) in vBulletin [The Cloudflare Blog]


Cloudflare has released a new rule as part of its Cloudflare Specials Rulesets, to protect our customers against a high-severity vulnerability in vBulletin.  

A new zero-day vulnerability was discovered for vBulletin, a proprietary Internet forum software. By exploiting this vulnerability, bad actors could potentially gain privileged access and control to the host servers on which this software runs, through Remote Code Execution (RCE).

Implications of this vulnerability

At Cloudflare, we use three key indicators to understand the severity of a vulnerability: 1) how many customers on Cloudflare are running the affected software, 2) the Common Vulnerability Scoring System (CVSS) score, and 3) the OWASP Top 10, an open-source security framework.

We assess this vulnerability to be very significant as it has a CVSS score of 9.8/10 and affects 7 out of the 10 key risk areas of the OWASP 2017 Top 10.

Remote Code Execution is considered a type of injection, which provides the capability to potentially launch a catastrophic attack. Through RCE an attacker can gain privileged access to the host server that might be running the unpatched and vulnerable version of this software. With elevated privileges the attacker could perform malicious activities, including discovering additional vulnerabilities in the system, checking for misconfigured file permissions on configuration files, and even deleting logs to wipe out any audit trail of their activities.

We have also often observed attackers exploit RCE vulnerabilities to deploy malware on the host, make it part of a DDoS botnet attack, or exfiltrate valuable data stored in the system.

Cloudflare’s continuously learning Firewall has you covered

At Cloudflare, we continuously strive to improve the security posture of our customers by quickly and seamlessly mitigating vulnerabilities of this nature. Protection against common RCE attacks is a standard feature of Cloudflare's Managed Rulesets. To provide coverage for this specific vulnerability, we have deployed a new rule within our Cloudflare Specials Rulesets (ruleId: 100166). Customers who have our Managed Rulesets and Cloudflare Specials enabled will be immediately protected against this vulnerability.

To check whether you have this protection enabled, please log in, navigate to the Firewall tab, and under the Managed Rulesets tab you will find the toggle to enable the WAF Managed Rulesets. See below:


Next, confirm that you have the Cloudflare Specials Rulesets enabled, by checking in the Managed Rulesets card as shown below:


Customers who use our free services, or those who don’t have Cloudflare’s Managed Rulesets turned on, can also protect themselves by deploying a patch on their own. The vBulletin team has released a security patch, the details of which can be found here.

Cloudflare’s Firewall is built on a network that continuously learns, spanning over 190 cities. In Q2’19 Cloudflare blocked an average of 44 billion cyber threats each day. Learn more about our simple, easy to use and powerful Cloudflare Firewall and protect your business today.

09:07

Saturday Morning Breakfast Cereal - Freudeity [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Actually, cognitive neuroethics says only the neurons involved in bad behavior should go to Hell.


Today's News:

Friday, 27 September

13:00

Birthday Week 2019 Wrap-up [The Cloudflare Blog]


This week we celebrated Cloudflare’s 9th birthday by launching a variety of new offerings that support our mission: to help build a better Internet.  Below is a summary recap of how we celebrated Birthday Week 2019.

Cleaning up bad bots

Every day Cloudflare protects over 20 million Internet properties from malicious bots, and this week you were invited to join in the fight!  Now you can enable “bot fight mode” in the Firewall settings of the Cloudflare Dashboard and we’ll start deploying CPU intensive code to traffic originating from malicious bots.  This wastes the bots’ CPU resources and makes it more difficult and costly for perpetrators to deploy malicious bots at scale. We’ll also share the IP addresses of malicious bot traffic with our Bandwidth Alliance partners, who can help kick malicious bots offline. Join us in the battle against bad bots – and, as you can read here – you can help the climate too!

Browser Insights

Speed matters, and if you manage a website or app, you want to make sure that you’re delivering a high performing website to all of your global end users. Now you can enable Browser Insights in the Speed section of the Cloudflare Dashboard to analyze website performance from the perspective of your users’ web browsers.  

WARP, the wait is over

Several months ago we announced WARP, a free mobile app purpose-built to address the security and performance challenges of the mobile Internet, while also respecting user privacy.  After months of testing and development, this week we (finally) rolled out WARP to approximately 2 million wait-list customers.  We also enabled WARP+, a WARP experience that uses Argo routing technology to route your mobile traffic across faster, less-congested, routes through the Internet.  WARP and WARP+ are now available in the iOS and Android App stores and we can’t wait for you to give it a try!

HTTP/3 Support

Last year we announced early support for QUIC, a UDP based protocol that aims to make everything on the Internet work faster, with built-in encryption. The IETF subsequently decided that QUIC should be the foundation of the next generation of the HTTP protocol, HTTP/3. This week, Cloudflare was the first to introduce support for HTTP/3 in partnership with Google Chrome and Mozilla.

Workers Sites

Finally, to wrap up our birthday week announcements, we announced Workers Sites. The Workers serverless platform continues to grow and evolve, and every day we discover new and innovative ways to help developers build and optimize their applications. Workers Sites enables developers to easily deploy lightweight static sites across Cloudflare’s global cloud platform without having to build out the traditional backend server infrastructure to support these sites.

We look forward to Birthday Week every year, as a chance to showcase some of our exciting new offerings — but we all know building a better Internet is about more than one week.  It’s an effort that takes place all year long, and requires the help of our partners, employees and especially you — our customers. Thank you for being a customer, providing valuable feedback and helping us stay focused on our mission to help build a better Internet.

Can’t get enough of this week’s announcements, or want to learn more? Register for next week’s Birthday Week Recap webinar to get the inside scoop on every announcement.

09:19

Saturday Morning Breakfast Cereal - Public Speaking [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The nice thing is that the counting numbers are infinite, so you can just keep describing bigger and bigger cubes.


Today's News:

07:01

Workers Sites: Extending the Workers platform with our own serverless building blocks [The Cloudflare Blog]


As of today, with the Wrangler CLI, you can now deploy entire websites directly to Cloudflare Workers and Workers KV. If you can statically generate the assets for your site, think create-react-app, Jekyll, or even the WP2Static plugin, you can deploy it to our entire global network, which spans 194 cities in more than 90 countries.

While you could deploy an entire site directly to Workers before, it wasn’t the easiest process. So, the Workers Developer Experience Team came up with a solution to make deploying static assets a significantly better experience.

Using our Workers command-line tool Wrangler, we've made it possible to deploy any static site to Workers in three easy steps: run wrangler init --site, configure the newly created wrangler.toml file with your account and project details, and then publish it to Cloudflare's edge with wrangler publish. If you want to explore how this works, check out our new Workers Sites tutorial for create-react-app, where we cover how this new functionality allows you to deploy without needing to write any additional code!

While in hindsight the path we took to get to this point might not seem the most straightforward, it really highlights the flexibility of the entire Workers platform to easily support use cases that we didn’t originally envision. With this in mind, I’ll walk you through the implementation and thinking we did to get to this point. I’ll also talk a bit about how the flexibility of the Workers platform has us excited, both for the ethos it represents, and the future it enables.

So, what went into building Workers Sites?

“Filesystem?! Where we’re going, we don’t need a filesystem!”

The Workers platform is built on V8 isolates, which, while awesome, lack a filesystem. If you’ve ever deployed a static site via FTP, uploaded it to object storage, or used a computer, you’d probably agree that filesystems are important. For many use cases, like building an API or routing, you don’t need a filesystem, but as the vision for Workers grew and our audience grew with it, it became clear to us that this was a limitation we needed to address for new features.

Welcome to the simulation

Without a filesystem, we decided to simulate one on top of Workers KV! Workers KV provides access to a secure key-value store that runs across Cloudflare’s Edge alongside Workers.

When running wrangler preview or wrangler publish, we check your wrangler.toml for the site key. The site key points to a bucket, the KV namespace we’ll use to store your static assets. We then upload each of your assets, where the path relative to the entry directory is the key, and the blob of the file is the value.


When a request from a user comes in, the Worker reads the request’s URI and looks up the asset that matches the segment requested. For example, if a user fetches “my-site.com/about.html”, the Worker looks up the “about.html” key in KV and returns the blob. Behind the scenes, we’ll also detect the mime-type of the requested asset and return the response with the correct content-type headers.
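A minimal sketch of that lookup, not the actual Workers Sites implementation, might look like the following. It assumes a hypothetical KV namespace bound as STATIC_ASSETS and a small mimeTypeFor helper that exists only for illustration:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  // "/about.html" becomes "about.html", the key used when the asset was uploaded.
  const key = url.pathname.replace(/^\//, '') || 'index.html';

  // Look the asset up in the KV namespace that simulates the filesystem.
  const body = await STATIC_ASSETS.get(key, 'arrayBuffer');
  if (body === null) {
    return new Response('Not found', { status: 404 });
  }
  return new Response(body, { headers: { 'Content-Type': mimeTypeFor(key) } });
}

function mimeTypeFor(key) {
  // Tiny illustrative mapping; a real implementation would use a full MIME database.
  const types = { html: 'text/html', css: 'text/css', js: 'application/javascript', png: 'image/png' };
  return types[key.split('.').pop()] || 'application/octet-stream';
}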

For folks who are used to building static sites or sites with a static asset serving component, this could feel deeply overengineered. Others may argue that, indeed, this is just how filesystems are built! The interesting thing for us is that we had to build one; there wasn’t one already there waiting for us.

It was great that we could put this together with Workers KV, but we still had a problem…

Cache rules everything around me

Workers KV is a database, and so it's set up for both read and write operations. However, it's primarily tuned for read-heavy workloads on entries that don’t generally have a long life span. This works well for applications where data is accessed frequently and often updated. But, for static websites, assets are generally written once, and then they are never (or infrequently) written to again. Static site content should be cached for a very long time, if not forever (long live Space Jam). This means we need to cache data much longer than KV is used to.

To fix this, on publish or preview, Wrangler walks the entry-point directory you’ve declared in your wrangler.toml and creates an asset manifest: a map of your filenames to a hash of their content. We use this asset manifest to map requests for a particular filename, say index.html, to the content hash of the most recently uploaded static asset.

You may be familiar with the concept of an asset manifest from using tools like create-react-app. Asset manifests help maintain asset fingerprints for caching in the browser. We took this idea and implemented it in Workers Sites, so that we can leverage the edge cache as well!
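To make the idea concrete, an asset manifest might conceptually look something like the snippet below. The format and hashed file names here are purely illustrative, not the exact structure Wrangler produces:

// Hypothetical asset manifest: request paths mapped to content-hashed KV keys.
const ASSET_MANIFEST = {
  'index.html': 'index.3f2a9c.html',
  'about.html': 'about.91b0de.html',
  'css/site.css': 'css/site.7d41aa.css',
};

// Resolve a request path to the KV key of the most recently uploaded version.
function kvKeyFor(path) {
  return ASSET_MANIFEST[path] || null;
}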


This allows us, after the first read per location, to cache the static assets in the Cloudflare cache so that they can be stored on the edge indefinitely. This reduces reads to KV to almost nothing; we want to use KV for durability purposes, but we want to use a longer caching strategy for performance. Let’s dive into exactly what this looks like:

How it works

When a new asset is created, wrangler publish will push the new asset to KV, and will publish an asset manifest to the edge alongside your Worker.


When someone first accesses your page, the Cloudflare location closest to them will run your Worker. The Worker script will determine the content hash of the asset they’ve requested by looking up that asset in the asset manifest. It will use the filename and content hash as the key to fetch the asset’s contents from KV. At this time it will also insert the asset’s contents into Cloudflare’s edge cache, again keyed by filename and content hash. It will then respond to the request with the asset.


On subsequent requests, the Worker script will look up the content hash in the asset manifest, and check the cache to see if the asset is there. Since this is a subsequent request, it will find your asset in the cache on the edge and return a response containing the asset without having to fetch the asset contents from KV.
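Pulling those steps together, the read path could be sketched roughly as follows, reusing the hypothetical STATIC_ASSETS binding and the kvKeyFor and mimeTypeFor helpers from the earlier sketches. This is illustrative only; the real Workers Sites code differs in the details:

async function serveAsset(event, path) {
  const kvKey = kvKeyFor(path); // content-hashed key from the asset manifest
  if (kvKey === null) {
    return new Response('Not found', { status: 404 });
  }

  // The cache is keyed by the hashed name, so a new upload is automatically a new key.
  const cache = caches.default;
  const cacheKey = new Request(`https://example.com/${kvKey}`);

  let response = await cache.match(cacheKey);
  if (!response) {
    // First request at this location: fetch from KV and populate the edge cache.
    const body = await STATIC_ASSETS.get(kvKey, 'arrayBuffer');
    response = new Response(body, {
      headers: {
        'Content-Type': mimeTypeFor(path),
        'Cache-Control': 'public, max-age=31536000, immutable',
      },
    });
    event.waitUntil(cache.put(cacheKey, response.clone()));
  }
  return response;
}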


So what happens when you update your index.html, or any of your static assets? The process is very similar to what happens on the upload of a new asset. You’ll run wrangler publish with your new asset on your local machine. Wrangler will walk your asset directory and upload the assets to KV. At the same time, it will create a new asset manifest containing the filename and a content hash representing the new contents of the asset. When a request comes into your Worker, your Worker will look into the asset manifest and retrieve the new content hash for that asset. The Worker will now look in the cache for the new hash! It will then fetch the new asset from KV, populate the cache, and return the new file to your end user.

Edge caching happens per location across 194 cities around the world, ensuring that the most frequently accessed content on your page is cached in a location closest to those requesting content, reducing latency. All of this happens in addition to the browser cache, which means that your assets are nearly always incredibly close to end users!

By being on the edge, a Worker is in a unique position to be able to cache not only static assets like JS, CSS and images, but also HTML assets! Traditional static site solutions use your site’s HTML as the entry point to the static site generator’s asset manifest. With this method of caching your HTML, it would be impossible to bust that cache because there is no other entry point to manage your assets’ fingerprints other than the HTML itself. However, in a Worker the entry point is your Worker! We can then leverage our Wrangler asset manifest to look up and fetch accurate, cacheable HTML, while at the same time cache busting on content hash.

Making the possible imaginable

“What we have is a crisis of imagination. Albert Einstein said that you cannot solve a problem with the same mind-set that created it.” - Peter Buffett

When building a brand new developer platform, there's often a vast number of possible applications. However, the sheer number of possibilities often makes each one difficult to imagine. That's why we think the most important part of any platform is its flexibility to adapt to previously unimagined use cases. And we don't mean that just for us; it's important that everyone has the ability to customize the platform to new and interesting use cases!

At face value, the work we did to implement this feature might seem like just another solution to a previously solved problem. However, it's a great example of how a group of dedicated developers can improve the platform experience for others.

We hope that by paving a way to include static assets in a Worker, developers can use the extra cognitive space to conceive of even more new ways to use Workers that may have been hard to imagine before.

Workers Sites isn’t the end goal, but a stepping stone to continue to think critically about what it means to build a Web Application. We're excited to give developers the space to explore how simple static applications can grow and evolve, when combined with the dynamic power of edge computing.

Go forth and build something awesome!


Have you built something interesting with Workers? Let us know @CloudflareDev!

07:00

Workers Sites: Deploy Your Website Directly on our Network [The Cloudflare Blog]

Workers Sites: Deploy Your Website Directly on our Network

Performance on the web has always been a battle against the speed of light — accessing a site from London that is served from Seattle, WA means every single asset request has to travel over seven thousand miles. The first breakthrough in the web performance battle was HTTP/1.1 connection keep-alive and browsers opening multiple connections. The next breakthrough was the CDN, bringing your static assets closer to your end users by caching them in data centers closer to them. Today, with Workers Sites, we’re excited to announce the next big breakthrough — entire sites distributed directly onto the edge of the Internet.

Deploying to the edge of the network

Why isn't caching assets enough? Caching does improve performance, but getting a significant improvement comes with a series of headaches. The CDN can make a guess at which assets it should cache, but that is just a guess. Configuring your site for maximum performance has always been an error-prone process, requiring a wide collection of esoteric rules and headers. Even when perfectly configured, almost nothing is cached forever, so precious requests still often need to travel all the way to your origin (wherever it may be). Cache invalidation is, after all, one of the hardest problems in computer science.

This raises the question: rather than clumsily moving bytes from the origin to the edge bit by bit, why not push the whole origin to the edge?

Workers Sites: Extending the Workers platform

Two years ago for Birthday Week, we announced Cloudflare Workers, a way for developers to write and run JavaScript and WebAssembly on our network in 194 cities around the world. A year later, we released Workers KV, our distributed key-value store that gave developers the ability to store state at the edge in those same cities.

Workers Sites leverages the power of Workers and Workers KV by allowing developers to upload their sites directly to the edge, closer to their end users. Born on the edge, Workers Sites is what we think modern development on the web should look like: natively secure, fast, and massively scalable. Less of your time is spent on configuration, and more on your code and content.

How it works

Workers Sites are deployed with a few terminal commands and can serve a site generated by any static site generator, such as Hugo, Gatsby, or Jekyll. Using Wrangler (our CLI), you can upload your site's assets directly into KV. When a request hits your Workers Site, the Cloudflare Worker generated by Wrangler reads and serves the asset from KV with the appropriate headers (no need to worry about Content-Type or Cache-Control; we've got you covered).

Workers Sites can be used to deploy any static site, such as a blog, marketing site, or portfolio. If you ever decide your site needs to become a little less static, remember that your Worker is just code: edit and extend it until you have a dynamic site running all around the world.
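
For example, here is a short, hedged sketch of how the generated Worker might be extended with a dynamic route while still serving static assets from KV. It assumes the @cloudflare/kv-asset-handler helper used by Wrangler's site template; treat it as an illustration rather than the exact generated code.

// Sketch of extending a Workers Site with a dynamic endpoint.
// Assumes the @cloudflare/kv-asset-handler helper; details may differ
// from the code Wrangler actually generates.
import { getAssetFromKV } from '@cloudflare/kv-asset-handler'

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event))
})

async function handleRequest(event) {
  const url = new URL(event.request.url)

  // A dynamic route living alongside the static site.
  if (url.pathname === '/api/time') {
    return new Response(JSON.stringify({ now: new Date().toISOString() }), {
      headers: { 'content-type': 'application/json' },
    })
  }

  // Everything else is served from KV, with headers handled for us.
  try {
    return await getAssetFromKV(event)
  } catch (e) {
    return new Response('Not found', { status: 404 })
  }
}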

Getting started

To get started with Workers Sites, you first need to sign up for Workers. After selecting your workers.dev subdomain, choose the Workers Unlimited plan (starting at $5 / month) to get access to Workers KV and the ability to deploy Workers Sites.

After signing up for Workers Unlimited you’ll need to install the CLI for Workers, Wrangler. Wrangler can be installed either from NPM or Cargo:

# NPM Installation
npm i @cloudflare/wrangler -g
# Cargo Installation
cargo install wrangler

Once you install Wrangler, you are ready to deploy your static site with the following steps:

  1. Run wrangler init --site in the directory that contains your static site's built assets
  2. Fill in the newly created wrangler.toml file with your account and project details
  3. Publish your site with wrangler publish

You can also check out our Workers Sites reference documentation or follow the full tutorial for create-react-app in the docs.

If you’d prefer to get started by watching a video, we’ve got you covered! This video will walk you through creating and deploying your first Workers Site:

Blazing fast: from Atlanta to Zagreb

In addition to improving the developer experience, we did a lot of work behind the scenes making sure that both deploys and the sites themselves are blazing fast — we’re excited to share the how with you in our technical blog post.

To test the performance of Workers Sites, we took one of our personal sites and deployed it to run some benchmarks. This test was for our site; your results may vary.

One common way to benchmark the performance of your site is with Google Lighthouse, which you can run directly from the Audits tab of your Chrome browser.

So we passed the first test with flying colors — 100! However, running a benchmark from your own computer introduces a bias: your users are not necessarily where you are. In fact, your users are increasingly not where you are.

Where you’re benchmarking from is really important: running tests from different locations will yield different results. Benchmarking from Seattle and hitting a server on the West coast says very little about your global performance.

We decided to use a tool called Catchpoint to run benchmarks from cities around the world. To see how we compare, we deployed the site to three different static site deployment platforms, including Workers Sites.

Since most providers offer data center regions on the coasts of the United States or in central Europe, it's common to see good performance in regions such as North America, and we've got you covered there:

[Benchmark results for North America]

But what about your users in the rest of the world? Performance is even more critical in those regions: users there are often not connecting to your site from a MacBook Pro on a blazing fast connection. Workers Sites allows you to reach those regions without any additional effort on your part: every time our map grows, your global presence grows with it.

We’ve done the work of running some benchmarks from different parts of the world for you, and we’re pleased to share the results:

[Benchmark results from locations around the world]

One last thing...

Deploying your next site with Workers Sites is easy and leads to great performance, so we thought it was only right that we deploy with Workers Sites ourselves. With this announcement, we are also open sourcing the Cloudflare Workers docs! And, they are now served from a Cloudflare data center near you using Workers Sites.

We can’t wait to see what you deploy with Workers Sites!


Have you built something interesting with Workers or Workers Sites? Let us know @CloudflareDev!


02:00

How to contribute to Fedora [Fedora Magazine]

One of the great things about open source software projects is that users can make meaningful contributions. With a large project like Fedora, there’s somewhere for almost everyone to contribute. The hard part is finding the thing that appeals to you. This article covers a few of the ways people participate in the Fedora community every day.

The first step for contributing is to create an account in the Fedora Account System. After that, you can start finding areas to contribute. This article is not comprehensive. If you don’t see something you’re interested in, check out What Can I Do For Fedora or contact the Join Special Interest Group (SIG).

Software development

This seems like an obvious place to get started, but Fedora has an “upstream first” philosophy. That means most of the software that ends up on your computer doesn’t originate in the Fedora Project, but with other open source communities. Even when Fedora package maintainers write code to add a feature or fix a bug, they work with the community to get those patches into the upstream project.

Of course, there are some applications that are specific to Fedora. These are generally more about building and shipping operating systems than the applications that get shipped to the end users. The Fedora Infrastructure project on GitHub has several applications that help make Fedora happen.

Packaging applications

Once software is written, it doesn’t just magically end up in Fedora. Package maintainers are the ones who make that happen. Fundamentally, the job of the package maintainer is to make sure the application successfully builds into an RPM package and to generally keep up-to-date with upstream releases. Sometimes, that’s as simple as editing a line in the RPM spec file and uploading the new source code. Other times, it involves diagnosing build problems or adding patches to fix bugs or apply configuration settings.

Packagers are also often the first point of contact for user support. When something goes wrong with an application, the user (or ABRT) will file a bug in Red Hat Bugzilla. The Fedora package maintainer can help the user diagnose the problem and either fix it in the Fedora package or help file a bug in the upstream project’s issue tracker.

Writing

Documentation is a key part of the success of any open source project. Without documentation, users don’t know how to use the software, contributors don’t know how to submit code or run test suites, and administrators don’t know how to install and run the application. The Fedora Documentation team writes release notes, in-depth guides, and short “quick docs” that provide task-specific information. Multi-lingual contributors can also help with translation and localization of both the documentation and software strings by joining the localization (L10n) team.

Of course, Fedora Magazine is always looking for contributors to write articles. The Contributing page has more information. [We’re partial to this way of contributing! — ed.]

Testing

Fedora users have come to rely on our releases working well. While we emphasize being on the leading edge, we want to make sure releases are usable, too. The Fedora Quality Assurance team runs a broad set of test cases and ensures all of the release criteria are met before anything ships. Before each release, the team arranges test days for various components.

Once the release is out, testing continues. Each package update first goes to the updates-testing repository before being published to the main updates repository. This gives people who are willing to test the opportunity to try updates before they go to the wider community.

Graphic design

One of the first things that people notice when they install a new Fedora release is the desktop background. In fact, using a new desktop background is one of our release criteria. The Fedora Design team produces several backgrounds for each release. In addition, they design stickers, logos, infographics, and many other visual elements for teams within Fedora. As you contribute, you may notice that you get awarded badges; the Badges team produces the art for those.

Helping others

Cooperative effort is a hallmark of open source communities. One of the best ways to contribute to any project is to help other users. In Fedora, that can mean answering questions on the Ask Fedora forum, the users mailing list, or in the #fedora IRC channel. Many third-party social media and news aggregator sites have discussion related to Fedora where you can help out as well.

Spreading the word

Why put so much effort into making something that no one knows about? Spreading the word helps our user and contributor communities grow. You can host a release party, speak at a conference, or share how you use Fedora on your blog or social media sites. The Fedora Mindshare committee has funds available to help with the costs of parties and other events.

Other contributions

This article only shared a few of the areas where you can contribute to Fedora. What Can I Do For Fedora has more options. If there’s something you don’t see, you can just start doing it. If others see the value, they can join in and help you. We look forward to your contributions!


Photo by Anunay Mahajan on Unsplash.

Thursday, 26 September

15:03

Hey ya’ll! This year’s Halloween bundle is now up in the... [Sarah's Scribbles]



Hey ya’ll! This year’s Halloween bundle is now up in the Scribbles Shop <3

07:58

Saturday Morning Breakfast Cereal - Together [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
'The sad thing is, even a war won't make everyone happy forever.'


Today's News:

Guess who did a crossover comic with PHD Comics?!

07:00

HTTP/3: the past, the present, and the future [The Cloudflare Blog]

HTTP/3: the past, the present, and the future

During last year’s Birthday Week we announced preliminary support for QUIC and HTTP/3 (or “HTTP over QUIC” as it was known back then), the new standard for the web, enabling faster, more reliable, and more secure connections to web endpoints like websites and APIs. We also let our customers join a waiting list to try QUIC and HTTP/3 as soon as they became available.

Since then, we’ve been working with industry peers through the Internet Engineering Task Force, including Google Chrome and Mozilla Firefox, to iterate on the HTTP/3 and QUIC standards documents. In parallel with the standards maturing, we’ve also worked on improving support on our network.

We are now happy to announce that QUIC and HTTP/3 support is available on the Cloudflare edge network. We’re excited to be joined in this announcement by Google Chrome and Mozilla Firefox, two of the leading browser vendors and partners in our effort to make the web faster and more reliable for all.

In the words of Ryan Hamilton, Staff Software Engineer at Google, “HTTP/3 should make the web better for everyone. The Chrome and Cloudflare teams have worked together closely to bring HTTP/3 and QUIC from nascent standards to widely adopted technologies for improving the web. Strong partnership between industry leaders is what makes Internet standards innovations possible, and we look forward to our continued work together.”

What does this mean for you, a Cloudflare customer who uses our services and edge network to make your web presence faster and more secure? Once HTTP/3 support is enabled for your domain in the Cloudflare dashboard, your customers can interact with your websites and APIs using HTTP/3. We’ve been steadily inviting customers on our HTTP/3 waiting list to turn on the feature (so keep an eye out for an email from us), and in the coming weeks we’ll make the feature available to everyone.

What does this announcement mean if you’re a user of the Internet interacting with sites and APIs through a browser and other clients? Starting today, you can use Chrome Canary to interact with Cloudflare and other servers over HTTP/3. For those of you looking for a command line client, curl also provides support for HTTP/3. Instructions for using Chrome and curl with HTTP/3 follow later in this post.

The Chicken and the Egg

Standards innovation on the Internet has historically been difficult because of a chicken and egg problem: which needs to come first, server support (like Cloudflare, or other large sources of response data) or client support (like browsers, operating systems, etc)? Both sides of a connection need to support a new communications protocol for it to be any use at all.

Cloudflare has a long history of driving web standards forward, from HTTP/2 (the version of HTTP preceding HTTP/3), to TLS 1.3, to things like encrypted SNI. We’ve pushed standards forward by partnering with like-minded organizations who share in our desire to help build a better Internet. Our efforts to move HTTP/3 into the mainstream are no different.

Throughout the HTTP/3 standards development process, we’ve been working closely with industry partners to build and validate client HTTP/3 support compatible with our edge support. We’re thrilled to be joined by Google Chrome and curl, both of which can be used today to make requests to the Cloudflare edge over HTTP/3. Mozilla Firefox expects to ship support in a nightly release soon as well.

Bringing this all together: today is a good day for Internet users; widespread rollout of HTTP/3 will mean a faster web experience for all, and today’s support is a large step toward that.

More importantly, today is a good day for the Internet: Chrome, curl, and Cloudflare (and soon Mozilla) rolling out experimental but functional support for HTTP/3 in quick succession shows that the Internet standards creation process works. Coordinated by the Internet Engineering Task Force, industry partners, competitors, and other key stakeholders can come together to craft standards that benefit the entire Internet, not just the behemoths.

Eric Rescorla, CTO of Firefox, summed it up nicely: “Developing a new network protocol is hard, and getting it right requires everyone to work together. Over the past few years, we've been working with Cloudflare and other industry partners to test TLS 1.3 and now HTTP/3 and QUIC. Cloudflare's early server-side support for these protocols has helped us work the interoperability kinks out of our client-side Firefox implementation. We look forward to advancing the security and performance of the Internet together.”

How did we get here?

Before we dive deeper into HTTP/3, let’s have a quick look at the evolution of HTTP over the years in order to better understand why HTTP/3 is needed.

It all started back in 1996 with the publication of the HTTP/1.0 specification which defined the basic HTTP textual wire format as we know it today (for the purposes of this post I’m pretending HTTP/0.9 never existed). In HTTP/1.0 a new TCP connection is created for each request/response exchange between clients and servers, meaning that all requests incur a latency penalty as the TCP and TLS handshakes are completed before each request.

Worse still, rather than sending all outstanding data as fast as possible once the connection is established, TCP enforces a warm-up period called “slow start”, which allows the TCP congestion control algorithm to determine the amount of data that can be in flight at any given moment before congestion on the network path occurs, and avoid flooding the network with packets it can’t handle. But because new connections have to go through the slow start process, they can’t use all of the network bandwidth available immediately.

The HTTP/1.1 revision of the HTTP specification tried to solve these problems a few years later by introducing the concept of “keep-alive” connections, that allow clients to reuse TCP connections, and thus amortize the cost of the initial connection establishment and slow start across multiple requests. But this was no silver bullet: while multiple requests could share the same connection, they still had to be serialized one after the other, so a client and server could only execute a single request/response exchange at any given time for each connection.

As the web evolved, browsers found themselves needing more and more concurrency when fetching and rendering web pages as the number of resources (CSS, JavaScript, images, …) required by each web site increased over the years. But since HTTP/1.1 only allowed clients to do one HTTP request/response exchange at a time, the only way to gain concurrency at the network layer was to use multiple TCP connections to the same origin in parallel, thus losing most of the benefits of keep-alive connections. While connections would still be reused to a certain (but lesser) extent, we were back at square one.

Finally, more than a decade later, came SPDY and then HTTP/2, which, among other things, introduced the concept of HTTP “streams”: an abstraction that allows HTTP implementations to concurrently multiplex different HTTP exchanges onto the same TCP connection, allowing browsers to more efficiently reuse TCP connections.

But, yet again, this was no silver bullet! HTTP/2 solves the original problem — inefficient use of a single TCP connection — since multiple requests/responses can now be transmitted over the same connection at the same time. However, all requests and responses are equally affected by packet loss (e.g. due to network congestion), even if the data that is lost only concerns a single request. This is because while the HTTP/2 layer can segregate different HTTP exchanges on separate streams, TCP has no knowledge of this abstraction, and all it sees is a stream of bytes with no particular meaning.

The role of TCP is to deliver the entire stream of bytes, in the correct order, from one endpoint to the other. When a TCP packet carrying some of those bytes is lost on the network path, it creates a gap in the stream and TCP needs to fill it by resending the affected packet when the loss is detected. While doing so, none of the successfully delivered bytes that follow the lost ones can be delivered to the application, even if they were not themselves lost and belong to a completely independent HTTP request. So they end up getting unnecessarily delayed as TCP cannot know whether the application would be able to process them without the missing bits. This problem is known as “head-of-line blocking”.

Enter HTTP/3

This is where HTTP/3 comes into play: instead of using TCP as the transport layer for the session, it uses QUIC, a new Internet transport protocol, which, among other things, introduces streams as first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so no additional handshakes and slow starts are required to create new ones, but QUIC streams are delivered independently such that in most cases packet loss affecting one stream doesn't affect others. This is possible because QUIC packets are encapsulated on top of UDP datagrams.

Using UDP allows much more flexibility compared to TCP, and enables QUIC implementations to live fully in user-space; updates to a protocol's implementations are not tied to operating system updates as is the case with TCP. With QUIC, HTTP-level streams can simply be mapped on top of QUIC streams to get all the benefits of HTTP/2 without the head-of-line blocking.

QUIC also combines the typical 3-way TCP handshake with TLS 1.3's handshake. Combining these steps means that encryption and authentication are provided by default, and also enables faster connection establishment. In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS.

But why not just use HTTP/2 on top of QUIC, instead of creating a whole new HTTP revision? After all, HTTP/2 also offers the stream multiplexing feature. As it turns out, it’s somewhat more complicated than that.

While it’s true that some of the HTTP/2 features can be mapped on top of QUIC very easily, that’s not true for all of them. One in particular, HTTP/2’s header compression scheme called HPACK, heavily depends on the order in which different HTTP requests and responses are delivered to the endpoints. QUIC enforces delivery order of bytes within single streams, but does not guarantee ordering among different streams.

This behavior required the creation of a new HTTP header compression scheme, called QPACK, which fixes the problem but requires changes to the HTTP mapping. In addition, some of the features offered by HTTP/2 (like per-stream flow control) are already offered by QUIC itself, so they were dropped from HTTP/3 in order to remove unnecessary complexity from the protocol.

HTTP/3, powered by a delicious quiche

QUIC and HTTP/3 are very exciting standards, promising to address many of the shortcomings of previous standards and ushering in a new era of performance on the web. So how do we go from exciting standards documents to working implementation?

Cloudflare's QUIC and HTTP/3 support is powered by quiche, our own open-source implementation written in Rust.

You can find it on GitHub at github.com/cloudflare/quiche.

We announced quiche a few months ago and since then have added support for the HTTP/3 protocol, on top of the existing QUIC support. We have designed quiche in such a way that it can now be used to implement HTTP/3 clients and servers or just plain QUIC ones.

How do I enable HTTP/3 for my domain?

As mentioned above, we have started on-boarding customers that signed up for the waiting list. If you are on the waiting list and have received an email from us communicating that you can now enable the feature for your websites, you can simply go to the Cloudflare dashboard and flip the switch from the "Network" tab manually:

[Screenshot of the HTTP/3 switch in the Cloudflare dashboard's "Network" tab]

We expect to make the HTTP/3 feature available to all customers in the near future.

Once enabled, you can experiment with HTTP/3 in a number of ways:

Using Google Chrome as an HTTP/3 client

In order to use the Chrome browser to connect to your website over HTTP/3, you first need to download and install the latest Canary build. Then, all you need to do to enable HTTP/3 support is start Chrome Canary with the "--enable-quic" and "--quic-version=h3-23" command-line arguments.

Once Chrome is started with the required arguments, you can just type your domain in the address bar, and see it loaded over HTTP/3 (you can use the Network tab in Chrome’s Developer Tools to check what protocol version was used). Note that due to how HTTP/3 is negotiated between the browser and the server, HTTP/3 might not be used for the first few connections to the domain, so you should try to reload the page a few times.

If this seems too complicated, don't worry: as the HTTP/3 support in Chrome becomes more stable over time, enabling it will become easier.

This is what the Network tab in the Developer Tools shows when browsing this very blog over HTTP/3:

[Screenshot of the Chrome Developer Tools Network tab showing this blog loading over HTTP/3]

Note that due to the experimental nature of the HTTP/3 support in Chrome, the protocol is actually identified as “http2+quic/99” in Developer Tools, but don’t let that fool you, it is indeed HTTP/3.

Using curl

The curl command-line tool also supports HTTP/3 as an experimental feature. You’ll need to download the latest version from git and follow the instructions on how to enable HTTP/3 support.

If you're running macOS, we've also made it easy to install an HTTP/3 equipped version of curl via Homebrew:

 % brew install --HEAD -s https://raw.githubusercontent.com/cloudflare/homebrew-cloudflare/master/curl.rb

In order to perform an HTTP/3 request all you need is to add the “--http3” command-line flag to a normal curl command:

 % ./curl -I https://blog.cloudflare.com/ --http3
HTTP/3 200
date: Tue, 17 Sep 2019 12:27:07 GMT
content-type: text/html; charset=utf-8
set-cookie: __cfduid=d3fc7b95edd40bc69c7d894d296564df31568723227; expires=Wed, 16-Sep-20 12:27:07 GMT; path=/; domain=.blog.cloudflare.com; HttpOnly; Secure
x-powered-by: Express
cache-control: public, max-age=60
vary: Accept-Encoding
cf-cache-status: HIT
age: 57
expires: Tue, 17 Sep 2019 12:28:07 GMT
alt-svc: h3-23=":443"; ma=86400
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 517b128df871bfe3-MAN

Using quiche’s http3-client

Finally, we also provide an example HTTP/3 command-line client (as well as a command-line server) built on top of quiche, that you can use to experiment with HTTP/3.

To get it running, first clone quiche’s GitHub repository:

$ git clone --recursive https://github.com/cloudflare/quiche

Then build it. You need a working Rust and Cargo installation for this to work (we recommend using rustup to easily set up a working Rust development environment).

$ cargo build --examples

And finally you can execute an HTTP/3 request:

$ RUST_LOG=info target/debug/examples/http3-client https://blog.cloudflare.com/

What’s next?

In the coming months we’ll be working on improving and optimizing our QUIC and HTTP/3 implementation, and will eventually allow everyone to enable this new feature without having to go through a waiting list. We'll continue updating our implementation as standards evolve, which may result in breaking changes between draft versions of the standards.

Here are a few new features on our roadmap that we're particularly excited about:

Connection migration

One important feature that QUIC enables is seamless and transparent migration of connections between different networks (such as your home WiFi network and your carrier’s mobile network as you leave for work in the morning) without requiring a whole new connection to be created.

This feature will require some additional changes to our infrastructure, but it’s something we are excited to offer our customers in the future.

Zero Round Trip Time Resumption

Just like TLS 1.3, QUIC supports a mode of operation that allows clients to start sending HTTP requests before the connection handshake has completed. We don’t yet support this feature in our QUIC deployment, but we’ll be working on making it available, just like we already do for our TLS 1.3 support.

HTTP/3: it's alive!

We are excited to support HTTP/3 and allow our customers to experiment with it while efforts to standardize QUIC and HTTP/3 are still ongoing. We'll continue working alongside other organizations, including Google and Mozilla, to finalize the QUIC and HTTP/3 standards and encourage broad adoption.

Here's to a faster, more reliable, more secure web experience for all.

Wednesday, 25 September

15:06

Apotheosis: A GCP Privilege Escalation Tool [Code as Craft]

The Principle of Least Privilege

One of the most fundamental principles of information security is the principle of least privilege. This principle states that users should only be given the minimal permissions necessary to do their job. A corollary of the principle of least privilege is that users should only have those privileges while they are actively using them. For especially sensitive actions, users should be able to elevate their privileges within established policies, take sensitive actions, and then return their privilege level to normal to resume normal usage patterns. This is sometimes called privilege bracketing when applied to software, but it’s also useful for human users.

Following this principle reduces the chance of accidental destructive actions due to typos or misunderstandings. It may also provide some protection in case the user’s credentials are stolen, or if the user is tricked into running malicious code. Furthermore, it can be used as a notice to perform additional logging or monitoring of user actions.

In Unix this takes the form of the su command, which allows authorized users to elevate their privileges, take some sensitive actions, and then reduce their permissions. The sudo command is an even more fine-grained approach with the same purpose, as it will elevate privileges for a single command. 

Some cloud providers have features that allow for temporary escalation of privileges. Authorized users can take actions with a role other than the one which is normally assigned to them. The credentials used to assume a role are temporary, so they will expire after a specified amount of time. However, we did not find a built-in solution to achieve the same functionality in Google Cloud Platform (GCP).

Enter Apotheosis

Apotheosis is a tool that is meant to address the issues above. The word apotheosis means the elevation of someone to divine status. It’s possible, and convenient, to give users permanent “godlike” permissions, but this is a violation of the principle of least privilege. This tool will allow us to “apotheosize” users, and then return them to a “mortal” level of privilege when their job duties no longer require additional privileges.

Users or groups can be given “actual permissions” and “eligible permissions”. For example, a user who currently has the owner role may instead be given only the viewer role, and we will call that their “actual permissions”. Then we can give them “eligible permissions” of owner, which will come in the form of the service account token creator role on a service account with the editor or organization admin role.

For this user to elevate their privileges, the Apotheosis command line program will use their GCP credentials to call the REST API to create a short-lived service account token. Then, using that token, Apotheosis will make another REST API call which will grant the requested permissions to the user. Or, alternatively, the permissions may be granted to a specified third party, allowing the Apotheosis user to leverage their eligible permissions to grant actual permissions to another entity. The program will wait for a specified amount of time, remove the requested permissions, and then delete the short-lived service account token.
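
As a rough illustration of that flow (sketched in JavaScript here for readability; this is not the actual Apotheosis implementation), the steps map onto two public GCP REST APIs: generateAccessToken on the IAM Credentials API mints the short-lived service account token, and getIamPolicy/setIamPolicy on the Cloud Resource Manager API add and later remove the role binding. All names and values below are placeholders:

// Hedged sketch of the elevation flow described above -- not Apotheosis itself.
// PRIVILEGED_SA, the project, role, and member values are all placeholders.
const PRIVILEGED_SA = 'apotheosis-admin@some-project.iam.gserviceaccount.com'

async function elevate(userToken, member, role, project, seconds) {
  // 1. Use the caller's own credentials to mint a short-lived token for the
  //    privileged service account (lifetime slightly longer than the grant,
  //    so the same token can still be used to revoke the binding).
  const resp = await fetch(
    `https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${PRIVILEGED_SA}:generateAccessToken`,
    {
      method: 'POST',
      headers: { Authorization: `Bearer ${userToken}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        scope: ['https://www.googleapis.com/auth/cloud-platform'],
        lifetime: `${seconds + 60}s`,
      }),
    }
  )
  const { accessToken } = await resp.json()

  // 2. Grant the requested role by adding a binding to the project's IAM policy.
  await modifyPolicy(accessToken, project, policy => {
    policy.bindings = policy.bindings || []
    policy.bindings.push({ role, members: [member] })
  })

  // 3. Wait for the requested duration, then remove the binding again.
  await new Promise(resolve => setTimeout(resolve, seconds * 1000))
  await modifyPolicy(accessToken, project, policy => {
    for (const b of policy.bindings || []) {
      if (b.role === role) b.members = b.members.filter(m => m !== member)
    }
  })
}

// Read-modify-write the project IAM policy; passing the fetched policy (with
// its etag) back to setIamPolicy guards against concurrent modifications.
async function modifyPolicy(accessToken, project, mutate) {
  const base = `https://cloudresourcemanager.googleapis.com/v1/projects/${project}`
  const headers = { Authorization: `Bearer ${accessToken}`, 'Content-Type': 'application/json' }
  const policy = await (await fetch(`${base}:getIamPolicy`, { method: 'POST', headers, body: '{}' })).json()
  mutate(policy)
  await fetch(`${base}:setIamPolicy`, { method: 'POST', headers, body: JSON.stringify({ policy }) })
}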

This process has the following advantages:

  • It requires no additional access controls or centralized server actions. There is no possibility of compromising the program since it is local and only capable of escalating to the level of privilege which users are already allowed in the GCP Identity and Access Management (IAM) configuration. 
  • The user is only required to enter one command in their terminal. It looks like this: apotheosis -m user:someuser@etsy.com -r roles/editor -d 600 --resource some-project. Or, to use the defaults, just apotheosis.
  • Any permissions will be granted by the designated service account. This allows for logging that service account’s IAM activity, and alerting on any troubling events in regards to that activity. 

Future Additions

Some additional features which may be added to Apotheosis are contingent on the launch of other features, such as conditional IAM. Conditional IAM will allow the use of temporal restrictions on IAM grants, which will make Apotheosis more reliable. With conditional IAM, if Apotheosis is interrupted and does not revoke the granted permissions, they will expire anyway.

The ability to allow restricted permissions granting will be a useful IAM feature as well. Right now a user or service account can be given a role like editor or organization admin, and then can grant any other role in existence. But if it were possible to allow granting a predefined list of roles, that would make Apotheosis useful for a larger set of users. As it is now, Apotheosis is useful for users who have the highest level of eligible privilege, since their access to the Apotheosis service account gives them all the privileges of that service account. That is, the scope of those privileges can be limited to a particular project, folder, or organization, but cannot be restricted to a limited set of actions. At the moment that service account must have one of the few permissions which grant the ability to assign any role to any user. 

Requiring two-factor authentication when using the short-lived service account token feature on a particular service account would be another useful feature. This would require an Apotheosis user to re-authenticate with another factor when escalating privileges.

Open Source

Apotheosis is open source and can be found on GitHub.

09:06

Saturday Morning Breakfast Cereal - Fairy Tales [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I mean what's wrong with just having a nursery rhyme about the distribution of wool?


Today's News:

Hey geeks, I'll be at NYCC to promote the new book. And, Friday night, October 4, the nerdiest event of the year will happen.