Thursday, 19 September

17:28

Node.js Brought To BeOS-Inspired Haiku Open-Source OS [Phoronix]

Haiku, the open-source operating system that still maintains BeOS compatibility, continues tacking on modern features and support for software well past the days of BeOS...

17:26

If you're using Harbor as your container registry, bear in mind it can be hijacked with has_admin_role = True [The Register]

Patch now before miscreants sail off with your apps, data

Video  IT departments using the Harbor container registry will want to update the software ASAP, following Thursday's disclosure of a bug that can be exploited by users to gain administrator privileges.…
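
For readers wondering what the bug class looks like in practice: per the public advisory (the flaw was tracked as CVE-2019-16097), Harbor's self-registration endpoint accepted extra attacker-supplied JSON fields such as has_admin_role. Below is a minimal sketch of that request shape with a hypothetical target URL; it is an illustration based on the advisory, not a tested exploit.

```python
# Minimal sketch of the reported flaw (CVE-2019-16097): Harbor's
# self-registration API accepted extra JSON fields, letting a new user
# grant themselves admin. The target URL is hypothetical.
import requests

HARBOR = "https://harbor.example.com"  # hypothetical instance

payload = {
    "username": "newuser",
    "email": "newuser@example.com",
    "realname": "New User",
    "password": "S0mePassw0rd!",
    "has_admin_role": True,  # the field a self-registering user should never control
}

resp = requests.post(f"{HARBOR}/api/users", json=payload, timeout=10)
print(resp.status_code)  # 201 Created on vulnerable, unpatched versions
```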

17:20

Vaping Criminal Probe Announced By FDA As Illnesses Rise To 530 [Slashdot]

The FDA has revealed a criminal investigation into the outbreak of vaping-related lung illnesses, which have risen to 530 across 38 states, according to the CDC. The Washington Post says there have been seven confirmed deaths from these illnesses so far. CNET reports: The FDA reportedly said it isn't seeking prosecution for ill people who've vaped cannabis and come forward with information. "The focus is on the supply chain," Mitch Zeller, director of the FDA's Center for Tobacco Products, told the Post. "We're very alarmed about products containing THC." Suspicion has recently turned to chemical dilutants, or "cutting agents," found in some black market THC vaping oils. The FDA has collected more than 150 samples from patients across the country and is now analyzing them for the presence of cutting agents and other substances. According to the CDC, more than half the patients are under 25, with two-thirds between 18 and 34, and 16% under 18.

Read more of this story at Slashdot.

16:40

Facebook campus death plunge: Cops say man jumped from 4th floor in apparent suicide [The Register]

Foul play ruled out at Menlo Park headquarters

A Facebook employee died at the tech giant's Silicon Valley headquarters today in an apparent suicide.…

16:40

YouTube Creators May Lose Verified Badges As Verification Process Becomes Stricter [Slashdot]

YouTube is rolling out changes to its verification program for creators, making it tougher for growing channels to earn a checkmark beside their name and removing verification badges from people who don't meet the heightened criteria. An anonymous Slashdot reader shares a report from The Verge: YouTube's current system allows anyone with more than 100,000 subscribers to be verified. Now, YouTube is emphasizing verifying prominent channels that have a "clear need for proof of authenticity," according to the company. This includes traditional YouTubers, musicians, comedians, and artists, among others. Verification is an extremely important feature for creators. It affects which creators get top recommendations when people search for something on YouTube. Channels that no longer meet the criteria and may have their badge removed will be notified today, YouTube confirmed to The Verge. Creators will have the option to appeal the decision before the change takes place in late October. The criteria for verification due to prominence essentially look at whether a creator or channel is recognizable enough both on and outside of YouTube that the company needs to authenticate them. The company's authenticity rules are pretty simple: a channel has to be owned and operated by the person or company it claims to be in order to get a checkmark or other verification mark. For example, Beyoncé's official channel should get a new artist profile icon and a musical note beside her name to show people that the page belongs to the real Beyoncé. Under the new policy, YouTube's team will handle verification on their end, according to a press release. Channels that meet the new requirements don't have to apply for verification, as it will automatically be handed out.

Read more of this story at Slashdot.

16:08

FedEx execs: We had no idea cyberattack would be so bad. Investors: Is that why you sold $40m+ of your own shares? [The Register]

Shareholders NotHappy about stock offloaded in NotPetya aftermath

FedEx execs not only hid the impact of the NotPetya ransomware on their business but personally profited by selling off tens of millions of dollars of their own shares before the truth came out, a lawsuit filed by the delivery business’ own shareholders claims.…

16:03

Alphabet Partners With FedEx, Walgreens To Bring Drone Delivery To the US [Slashdot]

Google's Wing drone-delivery company announced today that it would be partnering with FedEx and Walgreens to bring autonomous drone deliveries to the U.S. in October. "The pilot program will be launched in Christiansburg, Virginia, one of the two areas in the state that Wing has been testing its drone technology for years," reports Quartz. From the report: People expecting packages from FedEx will be able to choose to get their deliveries made via drone, assuming that they live in certain areas that Wing has designated it can safely deliver parcels in. Similarly, Walgreens customers will be able to order products, such as non-prescription medicine, and have them delivered by drone. Walgreens said in a release that 78% of the U.S. population lives within 5 miles of one of its stores. Wing said that its drones can currently make a round-trip flight of about 6 miles (9.7 km), traveling about 60 miles per hour (97 km per hour), and can carry around 3 lbs (1.4 kg) of payload. The company also said that it would be offering deliveries from a local Virginia retailer, Sugar Magnolia. Wing won't be charging for the delivery service itself during the trial. Wing said on a call with journalists that it will soon be reaching out to members of the Christiansburg community to let them know if they will be able to accept deliveries. Wing's drones don't actually land on the ground when they make deliveries; instead, they hover about 23 ft (7 m) off the ground, lowering their packages down through a winch cable system. If anything happens to snag the cable as it's delivering a package, the drone can sense the tension in the cord and release it, hopefully flying away without incident. It still requires what it calls safe delivery zones, like a backyard or a front pathway outside a house, to be able to make a delivery.
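
The snag-release behaviour described above amounts to a simple threshold rule on the winch's tension sensor. Here is a toy sketch of that logic; the threshold, sensor interface, and release hook are all invented for illustration, since Wing's actual control code is not public.

```python
# Toy sketch of the cable snag-release rule described above. All names and
# the 50 N threshold are hypothetical; Wing's real control logic is not public.
SNAG_TENSION_N = 50.0  # invented threshold

def on_tension_sample(tension_n: float, release_cable) -> None:
    """Called with each winch load-cell reading while lowering a package."""
    if tension_n > SNAG_TENSION_N:
        release_cable()  # drop the cable so the drone can fly away cleanly

on_tension_sample(12.0, lambda: print("release!"))  # normal lowering: nothing happens
on_tension_sample(75.0, lambda: print("release!"))  # snag detected: cable released
```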

Read more of this story at Slashdot.

15:25

Google Makes the Largest Ever Corporate Purchase of Renewable Energy [Slashdot]

Two years ago, Google became the first company of its size to buy as much renewable electricity as the electricity it used. But as the company grows, so does its demand for power. To stay ahead of that demand, Google just made the largest corporate renewable energy purchase in history, with 18 new energy deals around the world that will help build infrastructure worth more than $2 billion. From a report: The projects include massive new solar farms in places like Texas and North Carolina where the company has data centers. "Bringing incremental renewable energy to the grids where we consume energy is a critical component of pursuing 24x7 carbon-free energy for all of our operations," Google CEO Sundar Pichai wrote in a blog post today. While most of the renewable energy the company has purchased in the past has come from wind farms, the dropping cost of solar power means that several of the new deals are solar plants. In Chile, a new project combines both wind and solar power, making it possible to generate clean energy for longer each day.

Read more of this story at Slashdot.

15:17

Linux 5.4 DRM Pull Submitted With AMD Navi 12/14, Arcturus & Renoir Plus Intel Tigerlake [Phoronix]

While faithful Phoronix readers will have known about many of the features for a while, today the Direct Rendering Manager (DRM) graphics driver changes were sent in for the Linux 5.4 kernel...

14:45

AT&T Says Customers Can't Sue the Company For Selling Location Data To Bounty Hunters [Slashdot]

An anonymous reader quotes a report from Motherboard: AT&T is arguing that its customers can't sue the company for selling location data to bounty hunters, according to recently filed court records. AT&T says the customers signed contracts that force them into mandatory arbitration, meaning consumers have to settle complaints privately with the company rather than in court. The filing is in response to a lawsuit filed by the Electronic Frontier Foundation (EFF). "Each time they entered into a new Wireless Customer Agreement with AT&T, they [the plaintiffs] not only agreed to AT&T's Privacy Policy but also agreed to resolve their disputes with AT&T -- including the claims asserted in this action -- in arbitration on an individual basis," AT&T's filing from last week reads. When the plaintiffs, who are AT&T customers, accepted AT&T's terms and conditions when, say, purchasing a new phone, they also agreed specifically to the arbitration clause, AT&T argues. The Arbitration Agreement on AT&T's website reads, "AT&T and you agree to arbitrate all disputes and claims between us. This agreement to arbitrate is intended to be broadly interpreted." The class-action lawsuit comes after multiple investigations found that T-Mobile, Sprint, and AT&T were selling access to their customers' location data to bounty hunters and others not authorized to possess it. All of the telecom giants have since stopped selling the data, but that hasn't stopped lawyers from filing class-action lawsuits.

Read more of this story at Slashdot.

14:09

Call-center scammer loses $9m appeal in stunning moment of poetic justice [The Register]

But I only expected to pay $250,000, wails scumbag to wall of blank faces

A call-center scammer has lost his appeal to overturn a $9m fine – after a court pointed out the crook had specifically waived the right to appeal when he pleaded guilty.…

14:06

North America Has Lost 3 Billion Birds in 50 Years [Slashdot]

Slowly, steadily and almost imperceptibly, North America's bird population is dwindling. From a report: The sparrows and finches that visit backyard feeders number fewer each year. The flutelike song of the western meadowlark -- the official bird of six U.S. states -- is growing more rare. The continent has lost nearly 3 billion birds representing hundreds of species over the past five decades, in an enormous loss that signals an "overlooked biodiversity crisis," according to a study from top ornithologists and government agencies. This is not an extinction crisis -- yet. It is a more insidious decline in abundance as humans dramatically alter the landscape: There are 29 percent fewer birds in the United States and Canada today than in 1970, the study concludes. Grassland species have been hardest hit, probably because of agricultural intensification that has engulfed habitats and spread pesticides that kill the insects many birds eat. But the victims include warblers, thrushes, swallows and other familiar birds. "That's really what was so staggering about this," said lead author Ken Rosenberg, a senior scientist at the Cornell Lab of Ornithology and American Bird Conservancy. "The generalist, adaptable, so-called common species were not compensating for the losses, and in fact they were experiencing losses themselves. This major loss was pervasive across all the bird groups."

Read more of this story at Slashdot.

13:30

Apple's iOS 13 Just Launched But iOS 13.1, iPadOS Arrive Next Week [Slashdot]

Apple's latest iPhone software, iOS 13, is now available -- but on Tuesday, you'll already be able to download the first update, iOS 13.1. And you'll be able to revitalize your iPad with Apple's software created for its tablets. From a report: Apple may be best known for its hardware, but it's really the seamless integration of its devices with its software that's set it apart from rivals. The company's ability to control every aspect of its products -- something that began when Steve Jobs and Steve Wozniak founded Apple in 1976 -- has been key in making Apple the most powerful company in tech. The company's mobile software, iOS, gets revamped every year and launches when its latest phones hit the market. Starting Tuesday, you'll also be able to download the first update to the software, as well as the new iPadOS software tailored for Apple's tablets. iOS 13 brings a dedicated dark mode, a new swipe keyboard and a revamped Photos app (complete with video editing tools). iOS 13.1 will bring bug fixes and will let you share your ETA with friends and family members through Apple Maps. Siri shortcuts can be added to automations, and you can set up triggers to run any shortcut automatically.

Read more of this story at Slashdot.

13:00

Live in-depth interviews with tech makers and shakers in the heart of Silicon Valley? Why yes, it's The Next I/O Platform [The Register]

Dive deep into networking and storage next week with our awesome sister site, The Next Platform

Event  Join the editors of our sister site The Next Platform for The Next I/O Platform conference in San Jose on September 24.…

12:50

How the Internet Archive is Waging War on Misinformation [Slashdot]

The San Francisco-based non-profit is archiving billions of web pages in a bid to preserve web history. From a report: Since the 2016 US election, as fears about the power of fake news have intensified, the archive has stepped up its efforts to combat misinformation. At a time when false and ultra-partisan content is rapidly created and spread, and social media pages are constantly updated, the importance of having an unalterable record of who said what, when has been magnified. "We're trying to put in a layer of accountability," said founder Brewster Kahle. Mr Kahle founded the archive, which now employs more than 100 staff and costs $18m a year to run, because he feared that what was appearing on the internet was not being saved and catalogued in the same way as newspapers and books. The organisation is funded through donations, grants and the fees it charges third parties that request specific digitisation services. So far, the archive has catalogued 330bn web pages, 20m books and texts, 8.5m audio and video recordings, 3m images and 200,000 software programs. The most popular, public websites are prioritised, as are those that are commonly linked to. Some information is free to access, some is loaned out (if copyright laws apply) and some is only available to researchers. Curled up in a chair in his office after lunch, Mr Kahle lamented the combined impact of misinformation and how difficult it can be for ordinary people to access reliable sources of facts. "We're bringing up a generation that turns to their screens, without a library of information accessible via screens," said Mr Kahle. Some have taken advantage of this "new information system", he argued -- and the result is "Trump and Brexit." Having a free online library is crucial, said Mr Kahle, since "[the public is] just learning from whatever … is easily available."
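
As a small aside for developers, those snapshots are queryable programmatically. The sketch below uses the Wayback Machine's public "availability" endpoint to find the closest archived copy of a page; the endpoint and response shape follow the Internet Archive's public documentation, and the example URL and timestamp are arbitrary.

```python
# Small sketch: query the Wayback Machine's public availability API for the
# closest archived snapshot of a URL. Endpoint and response shape per the
# Internet Archive's public docs; example URL and timestamp are arbitrary.
import json
import urllib.request

def closest_snapshot(url: str, timestamp: str = "") -> str | None:
    query = f"https://archive.org/wayback/available?url={url}"
    if timestamp:
        query += f"&timestamp={timestamp}"  # YYYYMMDDhhmmss, closest match wins
    with urllib.request.urlopen(query, timeout=10) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

print(closest_snapshot("bbc.co.uk", "20161108"))
```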

Read more of this story at Slashdot.

12:10

Downloading Stays Legal, No Site Blocking, Swiss Copyright Law Says [Slashdot]

From a report: Switzerland's National Council has passed amendments aimed at modernizing the country's copyright law to make it more fit for the digital age. While services that host pirate sites or distribute content can expect a tougher ride moving forward, users will still be able to download pirate content for personal use. Furthermore, Swiss Internet service providers will not be required to prevent their customers accessing pirate sites.

Read more of this story at Slashdot.

12:05

LLVM 9.0 Released With Ability To Build The Linux x86_64 Kernel, Experimental OpenCL C++ [Phoronix]

It's coming almost one month behind schedule, but LLVM 9.0 is out today along with the Clang 9.0 C/C++ compiler and associated sub-projects for this open-source compiler infrastructure...

11:30

Amazon's 'Climate Pledge' Commits To Net Zero Carbon Emissions By 2040 and 100% Renewables by 2030 [Slashdot]

In Washington today, Amazon announced a series of initiatives and issued a call for companies to reduce their carbon emissions ten years ahead of the goals set forth in the Paris Agreement, as part of a sweeping effort to reduce its own environmental footprint. From a report: "We're done being in the middle of the herd on this issue -- we've decided to use our size and scale to make a difference," said Jeff Bezos, Amazon founder and chief executive, in a statement. "If a company with as much physical infrastructure as Amazon -- which delivers more than 10 billion items a year -- can meet the Paris Agreement 10 years early, then any company can." Bezos' statement comes as employees at his own company and others across the tech industry plan for a walkout on Friday to protest inaction on climate change from their employers. Amazon's initiatives include an order for 100,000 electric delivery vehicles from Rivian, a company in which Amazon has previously invested $440 million.

Read more of this story at Slashdot.

11:15

Google engineering boss sues web giant over sex discrim: I was paid less than men, snubbed for promotion [The Register]

Filing alleges less-qualified blokes given all the jobs, too

A technical director is suing Google for allegedly paying her less than male counterparts and promoting less-qualified men to positions for which she was more skilled.…

10:50

Philippines Declares New Polio Outbreak After 19 Years [Slashdot]

twocows writes: Philippine health officials declared a polio outbreak in the country on Thursday, nearly two decades after the World Health Organization declared it to be free of the highly contagious and potentially deadly disease. Health Secretary Francisco Duque III said at a news conference that authorities have confirmed at least one case of polio in a 3-year-old girl in southern Lanao del Sur province and detected the polio virus in sewage in Manila and in waterways in the southern Davao region. Those findings are enough to declare an outbreak of the crippling disease in a previously polio-free country like the Philippines, he said. The World Health Organization and the United Nations Children's Fund expressed deep concern over polio's reemergence in the country and said they would support the government in immunizing children, who are the most susceptible, and strengthening surveillance. "As long as one single child remains infected, children across the country and even beyond are at risk of contracting polio," UNICEF Philippines representative Oyun Dendevnorov said. WHO and UNICEF said in a joint statement the polio outbreak in the Philippines is concerning because it is caused by vaccine-derived poliovirus type 2.

Read more of this story at Slashdot.

10:40

German ministry hellbent on taking back control of 'digital sovereignty', cutting dependency on Microsoft [The Register]

'Pain points' include data collection, lock-in and uncontrollable costs

The Federal Ministry of the Interior (Bundesministerium des Innern or BMI) in Germany says it will reduce reliance on specific IT suppliers, especially Microsoft, in order to strengthen its "digital sovereignty".…

10:10

Hard luck, Claranet. You managed to go 29 whole days without an incident [The Register]

Five hours and counting, this does not look good for UK hoster

Brit hosting provider Claranet found itself resetting the "29 days without incident" sign this morning as "connectivity issues" felled customer emails and websites all over again.…

10:10

India Tells Tech Firms To Protect User Privacy, Prevent Abuse [Slashdot]

Technology firms must protect user privacy and prevent abuse of their platforms, India's IT minister said on Thursday, speaking as the government draws up a data privacy law and seeks to push companies to store more data locally. From a report: Federal Information and Technology Minister Ravi Shankar Prasad said he wanted Indians to have access to more technology platforms but said this should not undermine user privacy. "I have only one caveat -- it must be safe and secure, it must safeguard the privacy rights of the individual and you must make extra efforts that people don't abuse the system," Prasad told industry executives at a gathering organized by Alphabet's Google in New Delhi. India's 1.3 billion people and their massive consumption of mobile data have turned it into a key growth market for U.S. technology giants such as Google, Facebook and Amazon. India has already forced foreign payment firms such as Mastercard and Visa to store data locally.

Read more of this story at Slashdot.

09:44

KDE Plasma 5.17 Beta Rolls Out With Wayland Improvements, Overhauled Settings [Phoronix]

The beta release is out today for KDE Plasma 5.17...

09:37

Five NHS trusts do DeepMind data deal with Google. One says no [The Register]

Delicious data hoard handed over from UK contracts

Five National Health Service trusts have signed up to transfer their existing data deals with DeepMind to its parent company Google, but one has refused.…

09:30

Huawei's Flagship Mate 30 Pro Has Impressive Specs But No Google [Slashdot]

The Mate 30 series of smartphones from Huawei is now official, starting with the Mate 30 Pro and the Mate 30. From a report: The announcement of the Mate 30 series comes at a difficult time for Huawei, whose presence on the USA's entity list prevents US companies from doing business with the Chinese firm. Google said last month that these phones won't ship with Google's apps and services, nor will they come with the Play Store pre-installed, which is how most Android users outside of China download their apps. Huawei's response to the problem has been to nurture its own ecosystem of apps that are available through the Huawei App Gallery. The company announced that rather than shipping with Google's services pre-installed, the Mate 30 series would instead ship with the Huawei Mobile Services (HMS) Core, which it claims is already integrated with over 45,000 apps. The company announced that it was investing $1 billion into its software ecosystem with an investment that would be split across a development fund, a user growth fund, and a marketing fund.

Read more of this story at Slashdot.

09:00

You better get a wiggle on then: BT said to be mulling switching off UK's copper internets by 2027 [The Register]

Would that be the 'secret talks' Openreach is consulting on?

BT is considering moving its entire network to full fibre and will decommission its copper cables by 2027, according to reports.…

08:50

Solar and Wind Power So Cheap They're Outgrowing Subsidies [Slashdot]

For years, wind and solar power were derided as boondoggles. They were too expensive, the argument went, to build without government handouts. Today, renewable energy is so cheap that the handouts they once needed are disappearing. From a report: On sun-drenched fields across Spain and Italy, developers are building solar farms without subsidies or tax-breaks, betting they can profit without them. In China, the government plans to stop financially supporting new wind farms. And in the U.S., developers are signing shorter sales contracts, opting to depend on competitive markets for revenue once the agreements expire. The developments have profound implications for the push to phase out fossil fuels and slow the onset of climate change. Electricity generation and heating account for 25% of global greenhouse gases. As wind and solar demonstrate they can compete on their own against coal- and natural gas-fired plants, the economic and political arguments in favor of carbon-free power become harder and harder to refute. "The training wheels are off," said Joe Osha, an equity analyst at JMP Securities. "Prices have declined enough for both solar and wind that there's a path toward continued deployment in a post-subsidy world."

Read more of this story at Slashdot.

08:20

Chinese students in UK ripe target for scammers exploiting visa concerns [The Register]

Add in Brexit outsourcing mess and it's plain to see why young international scholars get duped

Scammers are exploiting Chinese students' Brexit fears by targeting them with phishing emails claiming their visas could be revoked, threat intel researchers say.…

08:19

A Total War Saga: TROY Seeing A Native Linux Port Next Year [Phoronix]

Creative Assembly revealed Total War Saga: TROY on Wednesday for release next year. Feral Interactive has announced they are porting this latest Total War game to macOS and Linux...

08:11

Saturday Morning Breakfast Cereal - Pearls [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
He showed so little desire for worldly pleasures that when he died he turned into a sweeeet pile of treasure.


Today's News:

Just 40 days till the new book launches!

08:01

Google is Bringing Its AI Assistant Service To People Without Internet Access [Slashdot]

An anonymous reader shares a report: Google Assistant, the digital assistant from the global search giant, is available to users through their smartphones, laptops, and smart speakers. Earlier this year, the company partnered with KaiOS to bring Assistant to some feature phones with internet access. Now Google is going a step further: bringing its virtual assistant to people who have the most basic cellphone with no internet access. It's starting this program in India. At an event in New Delhi on Thursday, the company announced a 24x7 telephone line that anyone in India on the Vodafone and Idea telecom networks (now operated by the merged Vodafone Idea) could dial to have their questions answered. The company said it tested the phone line service with thousands of users across Lucknow and Kanpur before making it generally available. Users will be able to dial 000-800-9191-000 and they won't be charged for the call or the service. Manuel Bronstein, a VP at Google, said through this program the company is hoping to reach hundreds of millions of users in India who currently don't have access to smartphones or internet.

Read more of this story at Slashdot.

07:50

Huawei to lob devs $1.5bn in apparent effort to Trump-proof cloud and mobile ecosystem [The Register]

Nugget dropped in keynote focused on cloud and AI announcements

Connect 2019  Huawei will plough $1.5bn into a developer programme to swell the ranks of coders that write software for its kit in an apparent effort to counter moves by the American government to ban US suppliers from dealing with the Chinese firm.…

07:16

UK taxman wins tribunal case against BBC presenters [The Register]

The Beeb may have forced you into it, but you still have to cough up

HMRC has won an IR35 tribunal against BBC journalists Joanna Gosling, David Eades and Tim Willcox.…

07:00

'Personal Carbon Sequestration' Device Uses Algae To Remove CO2 From the Air [Slashdot]

An anonymous reader quotes a report from Fast Company: In the future, your office might have an extra appliance next to the copy machine and the refrigerator: an algae bioreactor. Designed to fit inside offices and eventually sit on the rooftops throughout cities, it can capture as much carbon from the atmosphere as an acre of trees. And there's an initial prototype already at work. Inside the bioreactor, algae does the work. "What's amazing about algae is it's really cheap and it's easy to grow -- the core things it needs are sunlight, CO2, and water," says Ben Lamm, CEO and founder of Hypergiant Industries, an AI-focused tech company that developed a prototype of the device, called the Eos Bioreactor. Because algae grows much more quickly than trees, it can also sequester carbon more quickly; the company estimates that the device, which optimizes the algae's ability to capture CO2, can sequester around two tons of carbon out of the air each year. The first version of the device, which is currently in operation, is three-by-three-by-seven feet. It's a closed system that works indoors, connecting with an HVAC system to reduce CO2 levels inside and release cleaner air. The closed system also makes it possible for the team to study how algae grows -- with sensors monitoring everything from light and heat and pH to the speed of growth and oxygen output -- and how the system can be tweaked to work best in different conditions outside on rooftops. "With the first generation Eos, we have precise control of every aspect of the algae's environment and life cycle," he says. "It's a photobioreactor, but it's also an experimentation platform. We'll be using this platform to better understand the environment that best suits biomass production under controlled circumstances, so that we can better understand how to design reactors for the variety of environmental conditions we're going to encounter in the wild." The team behind the device says they're working on mobile apps that can monitor and run the bioreactors autonomously. It's also "working on DIY plans that it will release next year so people can build the bioreactors at home," the report mentions.
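
To make the "experimentation platform" idea concrete, here is a hypothetical sketch of the kind of sensor loop such a reactor might run. Every reading, threshold, and corrective action below is invented for illustration; Hypergiant has not published its control software.

```python
# Hypothetical sketch of a bioreactor monitoring loop of the kind described
# above. Every reading, threshold, and action here is invented.
from dataclasses import dataclass

@dataclass
class Readings:
    light_lux: float
    temp_c: float
    ph: float
    dissolved_o2_mgl: float

def adjust(readings: Readings) -> list[str]:
    """Return the corrective actions implied by one set of samples."""
    actions = []
    if readings.ph < 7.0:
        actions.append("dose buffer")        # keep the culture from acidifying
    if readings.temp_c > 30.0:
        actions.append("increase cooling")
    if readings.light_lux < 10_000:
        actions.append("raise grow lights")
    return actions

print(adjust(Readings(light_lux=8_000, temp_c=31.5, ph=6.8, dissolved_o2_mgl=9.2)))
```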

Read more of this story at Slashdot.

Running The AMD "ABBA" Ryzen 3000 Boost Fix Under Linux With 140 Tests [Phoronix]

Last week AMD's AGESA "ABBA" update began shipping with a fix to how the boost clock frequencies are handled in hopes of better achieving the rated boost frequencies for Ryzen 3000 series processors. I've been running some tests of an updated ASUS BIOS with this adjusted boost clock behavior to see how it performs under Linux with a Ryzen 9 3900X processor.

06:07

Belgian F-16 pilot rescued from power line after emergency ejection [The Register]

Two-seat jet crashed in France

A Belgian F-16 fighter jet pilot has been rescued from a power line after getting into difficulties and ejecting from his stricken aircraft.…

05:23

IBM cuts ribbon on quantum computing centre wherein a 53-qubit monster lurks [The Register]

Can probably run Crysis

IBM has opened a quantum computing centre in Poughkeepsie, New York, which adds 10 quantum systems to Big Blue's fleet.…

04:51

Valve's ACO Shader Compiler For The Mesa Radeon Vulkan Driver Just Landed [Phoronix]

It was just two days ago that Valve's performance-focused "ACO" shader compiler was submitted for review to be included in Mesa for the "RADV" Radeon Vulkan driver. Just minutes ago that new shader compiler back-end was merged for Mesa 19.3...

04:47

UK launches online VAT inquiry following fears of Brexit fraudster surge [The Register]

Come on guys, we're losing £1.5bn per year

An inquiry into online value-added tax (VAT) fraud is being launched this autumn, following concerns there may be an uptick in scams after Brexit.…

04:16

Mesa's Disk Cache Code Now Better Caters To 4+ Core Systems [Phoronix]

Most Linux gamers these days should be running at least quad-core systems, so Mesa 19.3 has been updated to reflect that reality in the number of CPU threads used by its disk cache...
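
Mesa itself is C, so the sketch below is not its code; it is a hedged Python illustration of the general pattern the change describes: sizing a background worker pool from the machine's core count rather than a fixed default. The formula is invented for illustration.

```python
# Hedged sketch of the general idea (not Mesa's actual code, which is C):
# derive a disk-cache worker count from the core count instead of a
# one-size-fits-all default. The formula below is illustrative only.
import os

def cache_threads(cap: int = 4) -> int:
    cores = os.cpu_count() or 1
    return max(1, min(cap, cores // 2))  # leave most cores to the application

print(cache_threads())
```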

04:10

Microsoft's Latest Open-Source Contribution: A New Font For Terminals & Code Editors [Phoronix]

This week Microsoft not only open-sourced their C++ standard library (STL) but they have also now shipped Cascadia Code...

04:03

WannaCry is still the smallpox of infosec. But the latest strain (sort of) immunises its victims [The Register]

Whatever you do, don't pay the ransom

Analysis  WannaCry – the file-scrambling ransomware that infamously locked up Britain's NHS and a bunch of other organisations worldwide in May 2017 – is still a live-ish threat to this day, infosec researchers reckon.…

04:02

BLK-IOCOST Merged For Linux 5.4 To Better Account For Cost Of I/O Workloads [Phoronix]

The Linux 5.4 block subsystem changes bring the new blk-iocost model...

04:00

AT&T Explores Parting Ways With DirecTV [Slashdot]

According to The Wall Street Journal, AT&T is exploring parting ways with its DirecTV unit as customers leave the service in droves. From the report: The telecom giant has considered various options, including a spinoff of DirecTV into a separate public company and a combination of DirecTV's assets with Dish Network, its satellite-TV rival, the people said. AT&T may ultimately decide to keep DirecTV in the fold. Despite the satellite service's struggles, as consumers drop their TV connections, it still contributes a sizable volume of cash flow and customer accounts to its parent. AT&T acquired DirecTV in 2015 for $49 billion. The company's shrinking satellite business is under a microscope after activist investor Elliott Management Corp. disclosed a $3.2 billion stake in AT&T last week and released a report pushing for strategic changes. Elliott has told investors that AT&T should unload DirecTV, The Wall Street Journal has previously reported. Jettisoning DirecTV would be an about-face for AT&T CEO Randall Stephenson, who billed the acquisition of the company as a bold move to diversify beyond the wireless phone business and tap into a growing media industry. The deal made AT&T the largest distributor of pay TV channels, ahead of Comcast. DirecTV is now part of an entertainment and consumer wireline unit that made up 27% of AT&T's $173.3 billion 2018 revenue. For Mr. Stephenson, who has helmed AT&T for 12 years, parting ways with DirecTV would be an acknowledgment that a major cornerstone of his diversification strategy hasn't gone as planned. It also adds pressure for AT&T to deliver on the promise of the Time Warner deal. Mr. Stephenson has signaled he is prepared to step down as CEO as soon as next year, the Journal reported last week. The Journal goes on to say that AT&T may ultimately decide to keep DirecTV because of "AT&T's towering net debt load, which stood at more than $160 billion earlier this year. The cash generated by the pay-TV giant has helped pay down that debt and fueled other investments in the rest of the company." "Any spinoff of DirecTV would be unlikely until mid-2020 at the earliest, five years after the deal closed, to make it a tax-efficient transaction for AT&T," the report adds.

Read more of this story at Slashdot.

03:30

Byte Night 2019 just weeks away: Ready to sleep on the streets for charity? [The Register]

Bed down for the night with tech's great and good at Action for Children fundraiser

That sound you can hear is the clock ticking down on your chance to register for this year’s Byte Night sleep out.…

03:00

We trained an AI to predict how bad a forest fire will be. It's just as good as a coin flip! [The Register]

What's that line? Your choices are half chance, so are everybody else's

Forest fires have apparently ravaged over four million acres of land across the United States so far this year, and the problem is only getting worse with global warming. Enter technology's hottest solution: Machine learning.…

02:46

Performance-Boosting DFSM Support Flipped On & Off For RADV Vulkan Driver [Phoronix]

Back in July of last year the RADV Vulkan driver enabled primitive binning and DFSM for this open-source Radeon Vulkan driver. Or so it thought: paired with the binning, the change did yield a minor performance benefit at the time for Raven Ridge APUs, but it now turns out the DFSM support wasn't properly wired up. That has since been addressed, though the corrected support is currently introducing a performance regression...

02:16

The Central Telegraph Office was serving spam 67 years before vikings sang about it on telly [The Register]

Farewell to St Paul's telco treasury

Geek's Guide to Britain  The BT Centre is an unremarkable-looking building just north of St Paul's Cathedral, nine storeys of Portland stone with straight unadorned 1980s lines softened by curved corners. The headquarters of the UK's largest telco could be mistaken for an apartment block if it wasn't for the company's logo.…

01:03

Btrfs & XFS File-Systems See More Fixes With Linux 5.4 [Phoronix]

The mature XFS and Btrfs file-systems continue seeing more fixes and cleanups with the now in-development Linux 5.4 kernel...

01:00

Navy Confirms Existence of UFOs Seen In Leaked Footage [Slashdot]

A Navy official has confirmed that recently released videos of unidentified flying objects are real, but that the footage was not authorized to be released to the public in the first place. From a report: Joseph Gradisher, the spokesman for the Deputy Chief of Naval Operations for Information Warfare, confirmed to TIME that three widely-shared videos captured "Unidentified Aerial Phenomena." Gradisher initially confirmed this in a statement to "The Black Vault," a website dedicated to declassified government documents. "The Navy designates the objects contained in these videos as unidentified aerial phenomena," Gradisher told the site. He tells TIME that he was "surprised" by the press coverage surrounding his statement to the site, particularly around his classification of the incursions as "unidentifiable," but says that he hopes that leads to UAPs being "de-stigmatized." "The reason why I'm talking about it is to drive home the seriousness of this issue," Gradisher says. "The more I talk, the more our aviators and all services are more willing to come forward." Gradisher would not speculate as to what the unidentified objects seen in the videos were, but did say they are usually proved to be mundane objects like drones -- not alien spacecraft. "The frequency of incursions have increased since the advents of drones and quadcopters," he says. The three videos of UFOs were published by the New York Times and "To the Stars Academy of Arts and Science," a self-described "public benefit corporation" co-founded by Tom DeLonge, best known as the vocalist and guitarist for the rock band Blink-182.

Read more of this story at Slashdot.

Wednesday, 18 September

23:55

IT now stands for Intermediate Targets: Tech providers pwned by snoops eyeing up customers – report [The Register]

Symantec says Tortoiseshell crew ransacked suppliers

Miscreants are hacking into Saudi Arabian IT providers in an attempt to compromise their real targets: said providers' customers, according to Symantec.…

23:02

Gasp! Google Chrome kills uBlock, Adblock ad filters – grab the pitchfo- no wait, it's OK: They were evil fraud clones [The Register]

Extensions used by nearly 2m people a week pretend to be legit add-ons, stuff cookies to make bank

On Wednesday, Google nuked two ad-blocking Chrome extensions that appear to have been designed to conduct affiliate-marketing fraud.…

21:30

Research Finds Black Carbon Breathed By Mothers Can Cross Into Unborn Children [Slashdot]

An anonymous reader quotes a report from The Guardian: Air pollution particles have been found on the fetal side of placentas, indicating that unborn babies are directly exposed to the black carbon produced by motor traffic and fuel burning. The research is the first study to show the placental barrier can be penetrated by particles breathed in by the mother. It found thousands of the tiny particles per cubic millimeter of tissue in every placenta analyzed. The link between exposure to dirty air and increased miscarriages, premature births and low birth weights is well established. The research suggests the particles themselves may be the cause, not solely the inflammatory response the pollution produces in mothers. The research, published in the journal Nature Communications, examined 25 placentas from non-smoking women in the Belgian town of Hasselt, which has particle pollution levels well below the EU limit, although above the WHO limit. Researchers used a laser technique to detect the black carbon particles, which have a unique light fingerprint. In each case, they found nanoparticles on the fetal side of the placenta and the number correlated with air pollution levels experienced by the mothers. There was an average of 20,000 nanoparticles per cubic millimeter in the placentas of mothers who lived near main roads. For those further away, the average was 10,000 per cubic millimeter. They also examined placentas from miscarriages and found the particles were present even in 12-week-old fetuses.

Read more of this story at Slashdot.

20:03

C-Section Babies Have More Potentially Infectious Gut Bacteria [Slashdot]

Scientists from the Wellcome Sanger Institute, UCL, the University of Birmingham and their collaborators discovered that whereas vaginally born babies got most of their gut bacteria from their mother, babies born via caesarean did not, and instead had more bacteria associated with hospital environments in their guts. Science Daily reports: The exact role of the baby's gut bacteria is unclear and it isn't known if these differences at birth will have any effect on later health. The researchers found the differences in gut bacteria between vaginally born and caesarean delivered babies largely evened out by 1 year old, but large follow-up studies are needed to determine if the early differences influence health outcomes. Experts from the Royal College of Obstetricians and Gynaecologists say that these findings should not deter women from having a caesarean birth. Published in Nature today, this largest ever study of neonatal microbiomes also revealed that the microbiome of vaginally delivered newborns did not come from the mother's vaginal bacteria, but from the mother's gut. This calls into question the controversial practice of swabbing babies born via caesarean with mother's vaginal bacteria. Understanding how the birth process impacts on the baby's microbiome will enable future research into bacterial therapies.

Read more of this story at Slashdot.

19:45

Remember that security probe that ended with a sheriff cuffing the pen testers? The contract is now public so you can decide who screwed up [The Register]

Both sides have different interpretations of the rules

The infosec duo cuffed during an IT penetration test that went south last week are out of jail, though not necessarily out of the woods.…

19:45

AI Can't Protect Us From Deepfakes, Argues New Report [Slashdot]

A new report from Data & Society raises doubts about automated solutions to deceptively altered videos, including machine learning-altered videos called deepfakes. Authors Britt Paris and Joan Donovan argue that deepfakes, while new, are part of a long history of media manipulation -- one that requires both a social and a technical fix. Relying on AI could actually make things worse by concentrating more data and power in the hands of private corporations. The Verge reports: As Paris and Donovan see it, deepfakes are unlikely to be fixed by technology alone. "The relationship between media and truth has never been stable," the report reads. In the 1850s, when judges began allowing photographic evidence in court, people mistrusted the new technology and preferred witness testimony and written records. By the 1990s, media companies were complicit in misrepresenting events by selectively editing out images from evening broadcasts. In the Gulf War, reporters constructed a conflict between evenly matched opponents by failing to show the starkly uneven death toll between U.S. and Iraqi forces. "These images were real images," the report says. "What was manipulative was how they were contextualized, interpreted, and broadcast around the clock on cable television." Today, deepfakes have taken manipulation even further by allowing people to manipulate videos and images using machine learning, with results that are almost impossible to detect with the human eye. Now, the report says, "anyone with a public social media profile is fair game to be faked." Once the fakes exist, they can go viral on social media in a matter of seconds. [...] Paris worries AI-driven content filters and other technical fixes could cause real harm. "They make things better for some but could make things worse for others," she says. "Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life."

Read more of this story at Slashdot.

19:25

Workers Accuse Kickstarter of Union-Busting In Federal Complaint [Slashdot]

On Monday night, unionizing employees at Kickstarter filed a complaint with the National Labor Relations Board (NLRB) accusing the company of wrongfully terminating two employees. Both of the employees were on the Kickstarter United organizing campaign. Motherboard reports: Kickstarter told Motherboard that the workers, Clarissa Redwine and Taylor Moore, were fired over performance issues within the past two weeks. But employees at Kickstarter are accusing the company of "discharging employees" because "they joined or supported a labor organization and in order to discourage union activities," according to the NLRB complaint, which was first reported and obtained by Slate's April Glaser. A third employee and member of the Kickstarter United organizing committee, Travis Brace, was informed on Thursday that he would no longer be needed in his role. In a September 12 email obtained by Motherboard, Aziz Hasan, the CEO of Kickstarter, wrote to employees, "There have been allegations that we are retaliating against union organizing. Those allegations are not true. No Kickstarter employee has been or ever will be fired for union organizing." Redwine says the company complained to her in recent months that she was not satisfactorily working with her managers. She claims that she was not given specific guidance on how she could improve. "Suddenly, after becoming a public union organizer, I started to get very strong negative feedback," Redwine told Motherboard. "After my best quarter at the company, I was told I was being put on a Performance Improvement Plan for slippery reasons like not building trust with my managers. I asked how progress would be tracked over and over and only received answers akin to 'just trust us.' I assume they never crafted the Performance Improvement Plan because they couldn't come up with anything concrete for me to improve." Redwine and Moore are asking for back pay and to be reinstated to their positions. In response to the complaint, Kickstarter said: "We'll be providing the NLRB with information about these firings and supporting documentation." Kickstarter told Motherboard that it "recently terminated two employees for performance reasons. A third was working on a service we shut down, so his role was eliminated, and there were no other positions here that would be a strong fit. That staff member will be transitioning out of the company. All three of these employees were members of the organizing committee, but this has nothing to do with their departures. (We have fired three other people who were not organizers since March.)" "We expect all employees -- including union organizers -- to be able to perform in their role and set up their teams and colleagues for success. We use a range of approaches -- twice-a-year performance reviews, peer feedback, manager feedback, one-on-one coaching and, in some cases, mediation -- to ensure that employees have the support they need to meet those expectations. When someone has been through this process and we have sufficient evidence that they are not meeting expectations, we must unfortunately part ways with them," the company continued.

Read more of this story at Slashdot.

18:45

New Eco-Friendly Game Packaging Could Save Tons of Plastic Each Year [Slashdot]

An anonymous reader quotes a report from Ars Technica: Sega and Sports Interactive have announced that Football Manager 2020 will be sold in new eco-friendly packaging that uses much less plastic, and they're pushing for the rest of the entertainment industry to follow suit. The new packaging replaces the now-standard plastic DVD case used for most game discs with a folded, reinforced cardboard sleeve made of 100% recycled fiber. The shrinkwrap surrounding that package has also been replaced with low-density polyethylene (LDPE), which is highly recyclable. Even the ink on the cardboard has been changed out for a vegetable-and-water-based version (so it's technically vegan if you're desperate for a snack). The new packaging does cost a bit more to produce -- about 20 pence per unit (or 30 percent), according to an open letter from Sports Interactive Studio Director Miles Jacobson. But those costs are somewhat offset by reduced shipping and destruction costs for excess units, he added. And as Spanish footballer Hector Bellerin says in a video accompanying the letter, "if there's no Earth, there's no money to spend." All told, Jacobson says the new packaging will save 55 grams of plastic per unit, or 20 tonnes across a print run of over 350,000. That's an extremely tiny dent in the estimated 335 million tons of plastic that is produced annually worldwide. But Jacobson hopes it could add up to a sizable dent if the entire industry follows suit for the tens of millions of discs it produces each year. "We're not the biggest game in the world," Jacobson said. "Imagine what happens if every other game, every film company, every music company switches to this packaging... So I'm throwing down the gauntlet here to ALL entertainment companies who use plastic for their Blu Ray, DVD and CD packaging."
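
The quoted figures check out, as the quick calculation below shows (taking the article's "over 350,000" as exactly 350,000 units):

```python
# Sanity-check of the figures above, taking "over 350,000" as 350,000 units.
grams_per_unit = 55
units = 350_000
tonnes = grams_per_unit * units / 1_000_000  # 1 tonne = 1,000,000 g
print(f"{tonnes:.2f} tonnes saved")  # 19.25, which the article rounds up to 20
```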

Read more of this story at Slashdot.

18:03

Apple Is Trying To Trademark 'Slofie' [Slashdot]

On Friday, Apple applied for a U.S. trademark on "Slofie," a made-up name for slow-motion selfies, a feature that's new to the iPhone 11 models. "The phones' front camera can now record video at 120 frames per second, which, when slowed down, results in a crisp slow-motion effect," writes Jacob Kastrenakes for The Verge. "The results are neat, though I'm not convinced they'll turn into the Animoji-like phenomenon Apple may be hoping for." From the report: Apple is applying for a trademark on slofies in connection with "downloadable computer software for use in capturing and recording video." That means this trademark seems to be more about preventing other companies from making slofie-branded camera apps than it is about limiting popular usage of this totally made-up word. Apple has reason to want to prevent the creation of knock-off slofie apps, too, since slofies are meant to be exclusive to the new iPhones. Despite the focus on apps, Apple doesn't actually offer a slofie app or a slofie mode on the new iPhones. The feature is just called "slo-mo" in Apple's camera app, and the company's current usage of slofie refers exclusively to the resulting videos, not the app or mode used to capture them. Apple seems to be hoping slofies will be a fun selling point for its new phones. The feature is mentioned across Apple's website, and Apple presented a slofie ad during the phones' launch event. It wouldn't be surprising to see a lot more airing in the coming weeks once the phones are out.
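
The slow-motion effect itself is just frame-rate arithmetic: capture fast, play back at a normal rate. A minimal sketch, assuming the conventional 30 fps playback rate (Apple has not specified the playback rate here):

```python
# Frame-rate arithmetic behind slow motion: capture high, play back normal.
# The 30 fps playback rate is an assumption, not an Apple-specified figure.
def slowdown_factor(capture_fps: float, playback_fps: float = 30.0) -> float:
    return capture_fps / playback_fps

print(slowdown_factor(120.0))  # 4.0: one second of action plays over four seconds
```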

Read more of this story at Slashdot.

17:56

Class-action sueball over refurbed iThings will ask Apple what 'as good as new' means [The Register]

Remanufactured kit never as reliable, complainants claim

A judge in California has OK'd a class-action lawsuit against Apple for alleged breaches of its AppleCare warranty schemes.…

17:30

Intel's Gallium3D OpenGL Driver Taps Another Optimization - ~32% For GFXBench [Phoronix]

Intel's new OpenGL Linux driver, their Gallium3D-based "Iris" implementation that is aiming to be the default before year's end, continues making striking progress...

17:27

How long is a lifetime? If you’re Comcast, it’s until a rival quits a city: ISP 'broke' price promise [The Register]

Angry cable subscriber sues, claims 'never-ending' deal actually lasted just three years

Maintaining its hard-won reputation for being one of the most-hated companies in America, Comcast has seemingly redefined the meaning of the word “lifetime” – and received a lawsuit in response.…

17:25

Exposed RDP Servers See 150K Brute-Force Attempts Per Week [Slashdot]

Slashdot reader Cameyo shares a report from TechRepublic: Remote Desktop Protocol (RDP) is -- to the frustration of security professionals -- both remarkably insecure and indispensable in enterprise computing. The September 2019 Patch Tuesday round closed two remote code execution bugs in RDP, while the high-profile BlueKeep and DejaBlue vulnerabilities from earlier this year have sent IT professionals into a patching frenzy. With botnets brute-forcing over 1.5 million RDP servers worldwide, a dedicated RDP security tool is needed to protect enterprise networks against security breaches. Cameyo released on Wednesday an open-source RDP monitoring tool -- appropriately titled RDPmon -- for enterprises to identify and secure against RDP attacks in their environments. The tool provides a visualization of the total number of attempted RDP connections to servers, as well as a view of the currently running applications, the number of RDP users, and what programs those users are running, likewise providing insight into the existence of unapproved software. RDPmon operates entirely on-premises; the program data is not accessible to Cameyo. Customers of Cameyo's paid platform can also utilize the RDP Port Shield feature, also released Wednesday, which opens RDP ports for authenticated users by setting IP address whitelists in Windows Firewall when users need to connect. RDP was designed with the intent to be run inside private networks, not accessible over the internet. Despite that, enterprise use of RDP over the internet is sufficiently widespread that RDP servers are a high-profile, attractive target for hackers. The report says Cameyo found that Windows public cloud machines on default settings -- that is, with port 3389 open -- experience more than 150,000 login attempts per week.
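
The precondition for all of these attacks is simply that port 3389 answers from the internet. Below is a minimal sketch of that basic check (this is not Cameyo's RDPmon, just a TCP reachability probe; the address shown is a documentation-range placeholder):

```python
# Minimal sketch (not Cameyo's tool): is the standard RDP port reachable?
# The address below is from the documentation range; substitute your own host.
import socket

def rdp_exposed(host: str, port: int = 3389, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rdp_exposed("203.0.113.10"))
```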

Read more of this story at Slashdot.

17:00

What is Google up to with Anthos? More toys dropped for Kubernetes-style hybrid cloud [The Register]

Service Mesh and Cloud Run put existing features in a pretty wrapper

Google talked up new features for Anthos, its take on hybrid cloud, at an event in New York earlier this week.…

16:45

Facebook Plans Launch of Its Own 'Supreme Court' For Handling Takedown Appeals [Slashdot]

An anonymous reader quotes a report from Ars Technica: Facebook, which has managed to transcend geographic borders to draw in a population equal to roughly a third of all human life on Earth, has made its final charter for a "Supreme Court" of Facebook public. The company pledges to launch this initiative by November of next year. The new Oversight Board will have five key powers, according to a charter (PDF) Facebook released yesterday. It can "request that Facebook provide information" it needs in a timely manner; it can make interpretations of Facebook standards and guidelines "in light of Facebook's articulated values"; and it can instruct the company to allow or remove content, to uphold or reverse a decision leading to content being permitted or removed, and to issue "prompt, written explanations of the board's decisions." "If someone disagrees with a decision we've made, they can appeal to us first, and soon they will be able to further appeal this to the independent board," company CEO Mark Zuckerberg wrote in a letter (PDF). "As an independent organization, we hope it gives people confidence that their views will be heard and that Facebook doesn't have the ultimate power over their expression." The board will launch with at least 11 members and should eventually get up to 40. The entity will contract its services to Facebook. Participants will serve a maximum of three three-year terms each and will be paid for their time. Their decisions will "be made publicly available and archived in a database of case decisions," with details subject to certain data or privacy restrictions. Facebook can also contact the board for an "automatic and expedited review" in exceptional circumstances, "when content could result in urgent real world consequences," such as, for example, if a mass-murderer is livestreaming his crimes. The panel's decisions will be binding, Facebook added, and the company will implement its findings promptly, "unless implementation of a resolution could violate the law."

Read more of this story at Slashdot.

16:05

Programmers Complain that Huawei's Ark Compiler is 'Not Even Half-Finished' [Slashdot]

A scam. A publicity stunt. Premature. These are just a few of the things Chinese developers are saying about the release of Huawei's supposed secret weapon: the Ark Compiler. From a report: Developers are even claiming the program feels incomplete. The reception has been so bad that one programmer told Abacus that he wondered whether it was released just for publicity. "Maybe they're doing it to help in the PR and trade war, adding leverage against the US," said Max Zhou, co-founder of app-enhancement company MetaApp and former head of engineering at Mobike. The Ark Compiler is a key component of Huawei's new operating system, HarmonyOS. The tool is meant to allow developers to quickly port their Android apps to the new OS, ideally helping to quickly bridge the gap of app availability. It is also said to be able to improve the efficiency of Android apps, making them as smooth as apps on iOS. As of right now, though, developers say the promises are too good to be true.

Read more of this story at Slashdot.

15:47

Woman sues Lyft, says driver gang-raped her at gunpoint – and calls for app safety measures we can't believe aren't already in place [The Register]

Sex assault survivor reveals her two-year fight for justice

Analysis  A woman who says she was subjected to a horrific rape at the hands of her Lyft driver has sued the tech biz.…

15:25

India Bans E-cigarettes as Global Vaping Backlash Grows [Slashdot]

India has announced a ban on electronic cigarettes, as a backlash gathers pace worldwide against a technology promoted as less harmful than smoking tobacco. From a report: The announcement by India on Wednesday came a day after New York became the second US state to ban flavored e-cigarettes following a string of vaping-linked deaths. "The decision was made keeping in mind the impact that e-cigarettes have on the youth of today," India's finance minister, Nirmala Sitharaman, told reporters in the capital, New Delhi. E-cigarettes heat up a liquid -- tasting of anything from bourbon to bubble gum or just tobacco, and which usually contains nicotine -- into vapor, which is inhaled. The vapor does not contain the estimated 7,000 chemicals present in tobacco smoke but does contain a number of substances that could potentially be harmful. They have been pushed by producers, and also by some governments, including in Europe, as a safer alternative to cigarette smoking -- and as a way to kick the habit.

Read more of this story at Slashdot.

15:16

Debian May Need To Re-Evaluate Its Interest In "Init System Diversity" [Phoronix]

Debian Project Leader Sam Hartman has shared his August 2019 notes where he outlines the frustrations and issues that have come up as a result of init system diversity with some developers still aiming to viably support systemd alternatives within Debian...

14:45

The FBI Tried To Plant a Backdoor in an Encrypted Phone Network [Slashdot]

The FBI tried to force the owner of an encrypted phone company to put a backdoor in his devices, Motherboard has learned. From the report: The company involved is Phantom Secure, a firm that sold privacy-focused BlackBerry phones and which ended up catering heavily to the criminal market, including members of the Sinaloa drug cartel, formerly run by Joaquín "El Chapo" Guzman. The news signals some of the tactics law enforcement may use as criminals continue to leverage encrypted communications for their own ends. It also comes as Canadian media reported that a former top official in the Royal Canadian Mounted Police (RCMP), who has been charged with leaking state secrets, offered to sell information to Vincent Ramos, Phantom's CEO. "He was given the opportunity to do significantly less time if he identified users or built in/gave backdoor access," one source who knows Ramos personally and has spoken with him about the issue after his arrest told Motherboard. A backdoor is a general term for some form of technical measure that grants another party, in this case the FBI, surreptitious access to a computer system. What exactly the FBI was technically after is unclear, but the desire for a backdoor was likely to monitor Phantom's clients.

Read more of this story at Slashdot.

14:17

Scotiabank slammed for 'muppet-grade security' after internal source code and credentials spill onto open internet [The Register]

Blueprints for mobile apps, databases exposed in public GitHub repos

Exclusive  Scotiabank leaked online a trove of its internal source code, as well as some of its private login keys to backend systems, The Register can reveal.…

14:14

Samba 4.11 Released With Much Better Scalability While Disabling SMB1 By Default [Phoronix]

Samba 4.11 is out as the latest big feature update to this SMB/CIFS/AD implementation for offering better Windows interoperability with Linux and other platforms. The changes in Samba 4.11 are so plentiful that we are a bit surprised it wasn't called Samba 5.0...

14:05

California Governor Signs Labor Law, Setting Up Bitter Gig Economy Fight [Slashdot]

California Governor Gavin Newsom signed a sweeping new law that could force gig companies like Uber and Lyft to reclassify their workers as employees. From a report: The hotly contested legislation, Assembly Bill 5, dictates that workers can generally only be considered contractors if they are doing work that is outside the usual course of a company's business. The law codifies a 2018 state supreme court ruling, and applies it to a wide range of state laws. It could upend the business models of companies that depend on armies of independent contractors, who aren't guaranteed employment protections like minimum wage and overtime. The bill is slated to go into effect on Jan. 1. While the legislature has adjourned until next year, fierce lobbying and deal-making efforts are expected to continue in the meantime, and could potentially yield separate legislation in 2020. In a statement, Newsom called the bill "landmark legislation," and said that, "A next step is creating pathways for more workers to form a union, collectively bargain to earn more, and have a stronger voice at work -- all while preserving flexibility and innovation." Lorena Gonzalez, the state assemblywoman who authored the bill, said in a statement that, "California is now setting the global standard for worker protections for other states and countries to follow." Further reading: Drivers? Never Heard of Them, Says Uber.

Read more of this story at Slashdot.

13:34

GitHub gobbles biz used by NASA, Google, etc to search code for bugs and security holes in Mars rovers, apps... [The Register]

Semmle's flaw-finding queries can be shared and used on multiple projects

On Wednesday, Microsoft's GitHub said it has acquired Semmle, a San Francisco-based software analysis platform for finding vulnerabilities in code. No price was disclosed.…

13:25

A Lunar Space Elevator Is Actually Feasible and Inexpensive, Scientists Find [Slashdot]

An anonymous reader shares a report: In a paper [PDF] published on the online research archive arXiv, Columbia astronomy students Zephyr Penoyre and Emily Sandford proposed the idea of a "lunar space elevator," which is exactly what it sounds like -- a very long elevator connecting the moon and our planet. The concept of a moon elevator isn't new. In the 1970s, similar ideas were floated in science fiction (Arthur C. Clarke's The Fountains of Paradise, for example) and by academics like Jerome Pearson and Yuri Artsutanov. But the Columbia study differs from previous proposals in an important way: instead of building the elevator from the Earth's surface (which is impossible with today's technology), it would be anchored on the moon and stretch some 200,000 miles toward Earth until hitting the geostationary orbit height (about 22,236 miles above sea level), at which objects move around Earth in lockstep with the planet's own rotation. Dangling the space elevator at this height would eliminate the need to place a large counterweight near Earth's orbit to balance out the planet's massive gravitational pull if the elevator were to be built from the ground up. This method would also prevent any relative motion between Earth's surface and space below the geostationary orbit area from bending or twisting the elevator. These won't be problems for the moon because the lunar gravitational pull is significantly smaller and the moon is tidally locked, meaning it keeps the same face turned toward Earth during its orbit, so there is no relative motion at the anchor point.

Read more of this story at Slashdot.

13:00

Uni sysadmins, don't relax. Cybercrooks are still after your crown jewels, warns NCSC [The Register]

GCHQ offshoot says be on your guard

Cybercrims are still likely to target universities and other educational institutions online with ransomware, reckons GCHQ offshoot the National Cyber Security Centre.…

12:45

Amazon Will Soon Let You Make Campaign Contributions Through Your Alexa Device [Slashdot]

On Thursday, you'll be able to make campaign donations to 2020 presidential candidates through your Amazon Alexa devices -- or at least to those candidates whom Amazon deems eligible to set up an account. From a report: If a campaign chooses to sign up for Alexa donations, you'll be able to donate to it by merely saying, "Alexa, I want to make a political contribution," or "Alexa, donate [amount] to [candidate name]." All donations will be processed through Amazon Pay, and users will receive email receipts for their contributions as well. Strangely, the feature is only available to 2020 presidential candidates Amazon defines as "principal campaign committees." It's not apparent who Amazon considers "principal" and for what reasons. Contributions will be limited to between $5 and $200.

Read more of this story at Slashdot.

12:05

Have Flagship Smartphone Prices Peaked? [Slashdot]

Analyst Ben Wood, writing for research firm CCS Insight: Smartphone makers have been testing the economic rule of supply and demand for the past decade, seemingly defying conventional wisdom in consumer electronics products by raising prices. Greater utility and constant use combined to grow the value of smartphones to customers. But it seems that top phone-makers are learning that no tree grows to heaven, as prices beyond the psychological threshold of $1,000 have created sticker shock among some consumers. Apple's announcement of the iPhone 11 at its annual product event last week largely centered on incremental improvements such as better cameras and battery life, but the company's decision to lower the price of its base flagship smartphone caught our eye. The iPhone 11 will cost $699 in the US. A year ago, Apple introduced the iPhone XR at $749. It's a subtle but interesting move that sees Apple shifting its "mid-range" iPhone back to a price of $699, where it previously resided with the iPhone 8. Apple's decision to lower pricing can be seen as an acknowledgement that it has tested the upper limits of consumer acceptance. At a time when the company wants to expand its number of customers as it builds out its ecosystem of content and services, it's sensible that it slightly brought down the barriers for consumers to get their hands on the new device.

Read more of this story at Slashdot.

12:00

Backup biz Acronis ascends to unicorndom after $147m splurge led by Goldman Sachs [The Register]

Cash for acquisitions, new hires and data centre expansion

Goldman Sachs has led a $147m funding round in Acronis which values the data recovery and protection vendor at more than a billion dollars – unicorn status.…

11:27

Steam Play's Proton 4.11-5 Released With Fixes & Optimizations [Phoronix]

Barely a week since the release of Proton 4.11-4, Valve's stellar Linux crew in cooperation with CodeWeavers have issued Proton 4.11-5 as the latest update to this Wine 4.11 downstream that powers Steam Play for running Windows games on Linux...

11:25

IBM's New 53-qubit Quantum Computer is Its Biggest Yet [Slashdot]

IBM's 14th quantum computer is its most powerful so far, a model with 53 of the qubits that form the fundamental data-processing element at the heart of the system. From a report: The system, available online to quantum computing customers in October, is a big step up from the last IBM Q machine with 20 qubits and should help advance the marriage of classical computers with the crazy realm of quantum physics. Quantum computing remains a highly experimental field, limited by the difficult physics of the ultra-small and by the need to keep the machines refrigerated to within a hair's breadth of absolute zero to keep outside disturbances from ruining any calculations. But if engineers and scientists can continue the progress, quantum computers could help solve computing problems that are, in practice, impossible on today's classical computers. That includes things like simulating the complexities of real-world molecules used in medical drugs and materials science, optimizing financial investment performance, and delivering packages with a minimum of time and fuel.

Read more of this story at Slashdot.

11:00

Adobe results show it is still creaming those subscriptions but its share price fell – why? [The Register]

Q3 figures = good, Q4 targets = vague

A day after Adobe posted hugely profitable Q3 results, its share price dropped by as much as 5 per cent (~$270) and has been bouncing around the sub 3 per cent mark (~$275) for the remainder of trading.…

10:47

How Long Before These Salmon Are Gone? 'Maybe 20 Years' [Slashdot]

An anonymous reader shares a report: The Middle Fork of the Salmon River, one of the wildest rivers in the contiguous United States, is prime fish habitat. Cold, clear waters from melting snow tumble out of the Salmon River Mountains and into the boulder-strewn river, which is federally protected. The last of the spawning spring-summer Chinook salmon arrived here in June after a herculean 800-mile upstream swim. Now the big fish -- which can weigh up to 30 pounds -- are finishing their courtship rituals. Next year there will be a new generation of Chinook. In spite of this pristine 112-mile-long mountain refuge, the fish that have returned here to reproduce and then die for countless generations are in deep trouble. Some 45,000 to 50,000 spring-summer Chinook spawned here in the 1950s. These days, the average is about 1,500 fish, and declining. And not just here: Native fish are in free-fall throughout the Columbia River basin, a situation so dire that many groups are urging the removal of four large dams to keep the fish from being lost. "The Columbia River was once the most productive wild Chinook habitat in the world," said Russ Thurow, a fisheries research scientist with the Forest Service's Rocky Mountain Research Station. Standing alongside the Salmon River in Idaho, Mr. Thurow considered the prospect that the fish he had spent most of his life studying could disappear. "It's hard to say, but now these fish have maybe four generations left before they are gone," he said. "Maybe 20 years."

Read more of this story at Slashdot.

10:30

Microsoft exFAT File-System Mailed In For Linux 5.4 Along With Promoted EROFS & Greybus [Phoronix]

Greg Kroah-Hartman began volleying his Linux 5.4 kernel pull requests today of the subsystems he oversees. The most significant of this morning's pull requests are the staging area changes that include the Microsoft exFAT file-system support...

10:04

Smart TVs, Smart-Home Devices Found To Be Leaking Sensitive User Data, Researchers Find [Slashdot]

Smart-home devices, such as televisions and streaming boxes, are collecting reams of data -- including sensitive information such as device locations -- that is then being sent to third parties like advertisers and major tech companies, researchers said Tuesday. From a report: As the findings show, even as privacy concerns have become a part of the discussion around consumer technology, new devices are adding to the hidden and often convoluted industry around data collection and monetization. A team of researchers from Northeastern University and the Imperial College of London found that a variety of internet-connected devices collected and distributed data to outside companies, including smart TV and TV streaming devices from Roku and Amazon -- even if a consumer did not interact with those companies. "Nearly all TV devices in our testbeds contacts Netflix even though we never configured any TV with a Netflix account," the Northeastern and Imperial College researchers wrote. The researchers tested a total of 81 devices in the U.S. and U.K. in an effort to gain a broad idea of how much data is collected by smart-home devices, and where that data goes.

Read more of this story at Slashdot.

10:00

Analytics exec nicked as Ecuador tries to rush through privacy laws after massive data leak [The Register]

Government gave them the deets, so not a hacking charge

The head of Novaestrat, the data analytics company at the centre of the huge leak revealed on Monday involving personal information about more than 20 million Ecuadorian citizens, has been taken into custody.…

09:24

Crypto-mining Malware Saw New Life Over the Summer as Monero Value Tripled [Slashdot]

Malware that mines cryptocurrency made a comeback over the summer, with an increased number of campaigns being discovered and documented by cyber-security firms. From a report: The primary reason for this sudden resurgence is the general revival of the cryptocurrency market, which saw trading prices recover after a spectacular crash in late 2018. Monero, the cryptocurrency of choice of most crypto-mining malware operations, was one of the many cryptocurrencies that were impacted by this market slump. The currency, also referred to as XMR, has gone down from an exchange rate that orbited around $300 - $400 in late 2017 to a meager $40 - $50 at the end of 2018. But as the Monero trading price recovered throughout 2019, tripling its value from $38 at the start of the year, to nearly $115 over the summer, so have malware campaigns. These are criminal operations during which hackers infect systems with malware that's specifically designed to secretly mine Monero behind the computer owner's back. Starting with the end of May, the number of reports detailing crypto-mining campaigns published by cyber-security firms has exploded, with a new report published each week, and sometimes new campaigns being uncovered on a daily basis.

Read more of this story at Slashdot.

09:20

Ebuygumm doesn't break t' Nominet rules, eBay and Gumtree told [The Register]

By 'eck! Geoffrey Boycott-inspired domain sees off tat bazaar challenge

eBay and Gumtree have lost a legal fight to kill off a British wannabe rival thanks to Geoffrey Boycott's usage of a well-known Yorkshireism.…

09:00

FreeBSD 12 & DragonFlyBSD 5.6 Running Well On The AMD Ryzen 7 3700X + MSI X570 GODLIKE [Phoronix]

For those wondering how well FreeBSD and DragonFlyBSD are handling AMD's new Ryzen 3000 series desktop processors, here are some benchmarks on a Ryzen 7 3700X with MSI MEG X570 GODLIKE where both of these popular BSD operating systems were working out-of-the-box. For some fun mid-week benchmarking, here are those results of FreeBSD 12.0 and DragonFlyBSD 5.6.2 up against openSUSE Tumbleweed and Ubuntu 19.04.

08:45

UK.gov confirms: Yes, our former DWP perm sec will join Salesforce [The Register]

No lobbying for at least another five months

The government has confirmed former permanent secretary at the Department for Work and Pensions, Robert Devereux, has rocked up at ethical SaaS outfit Salesforce as veep of global public sector.…

08:43

AI Learned To Use Tools After Nearly 500 Million Games of Hide and Seek [Slashdot]

In the early days of life on Earth, biological organisms were exceedingly simple. They were microscopic unicellular creatures with little to no ability to coordinate. Yet billions of years of evolution through competition and natural selection led to the complex life forms we have today -- as well as complex human intelligence. Researchers at OpenAI, the San Francisco-based for-profit AI research lab, are now testing a hypothesis: if you could mimic that kind of competition in a virtual world, would it also give rise to much more sophisticated artificial intelligence? From a report: The experiment builds on two existing ideas in the field: multi-agent learning, the idea of placing multiple algorithms in competition or coordination to provoke emergent behaviors, and reinforcement learning, the specific machine-learning technique that learns to achieve a goal through trial and error. In a new paper released today, OpenAI has now revealed its initial results. Through playing a simple game of hide and seek hundreds of millions of times, two opposing teams of AI agents developed complex hiding and seeking strategies that involved tool use and collaboration. The research also offers insight into OpenAI's dominant research strategy: to dramatically scale existing AI techniques to see what properties emerge.

Read more of this story at Slashdot.

08:03

Cloudflare’s Approach to Research [The Cloudflare Blog]


Cloudflare’s mission is to help build a better Internet. One of the tools used in pursuit of this goal is computer science research. We’ve learned that some of the difficult problems to solve are best approached through research and experimentation to understand the solution before engineering it at scale. This research-focused approach to solving the big problems of the Internet is exemplified by the work of the Cryptography Research team, which leverages research to help build a safer, more secure and more performant Internet. Over the years, the team has worked on more than just cryptography, so we’re taking the model we’ve developed and expanding the scope of the team to include more areas of computer science research. Cryptography Research at Cloudflare is now Cloudflare Research. I am excited to share some of the insights we’ve learned over the years in this blog post.

Cloudflare’s research model

  • Team structure: Hybrid approach. We have a program that allows research engineers to be embedded into product and operations teams for temporary assignments. This gives people direct exposure to practical problems.
  • Problem philosophy: Impact-focused. We use our expertise and the expertise of partners in industry and academia to select projects that have the potential to make a big impact, and for which existing solutions are insufficient or not yet popularized.
  • Promoting solutions: Open collaboration. Popularizing winning ideas through public outreach, working with industry partners to promote standardization, and implementing ideas at scale to show they’re effective.

The hybrid approach to research

“Super-ambitious goals tend to be unifying and energizing to people; but only if they believe there's a chance of success.” - Peter Diamandis

Given the scale and reach of Cloudflare, research problems (and opportunities) present themselves all the time. Our approach to research is a practical one. We choose to tackle projects that have the potential to make a big impact, and for which existing solutions are insufficient. This stems from a belief that the interconnected systems that make up the Internet can be changed and improved in a fundamental way. While some research problems are solvable in a few months, some may take years. We don’t shy away from long-term projects, but the Internet moves fast, so it’s important to break down long-term projects into smaller, independently-valuable pieces in order to continually provide value while pursuing a bigger vision.

Successful technological innovation is not purely about technical accomplishments. New creations need the social and political scaffolding to support them while they are being built, and the momentum and support to gain popularity. We are better able to innovate if grounded in a deep understanding of the current day-to-day. To stay grounded, our research team members spend part of their time solving practical problems that affect Cloudflare and our customers right now.

Cloudflare employs a hybrid research model similar to the model pioneered by Google. Innovation can come from everywhere in a company, so teams are encouraged to find the right balance between research and engineering activities. The research team works with the same tools, systems, and constraints as the rest of the engineering organization.

Research engineers are expected to write production-quality code and contribute to engineering activities. This enables researchers to leverage the rich data provided by Cloudflare’s production environment for experiments. To further break down silos, we have a program that allows research engineers to be embedded into product and operations teams for temporary assignments. This gives people direct exposure to practical problems.

Continuing a successful tradition (our tradition)

“Skate to where the puck is going, not where it has been.” - Wayne Gretzky

The output of the research team is both new knowledge and technology that can lead to innovative products. Research works hand-in-hand with both product and engineering to help drive long-term positive outcomes for both Cloudflare and the Internet at large.

An example of a long-term project that requires both research and engineering is helping the Internet migrate from insecure to secure network protocols. To tackle the problem, we pursued several smaller projects with discrete and measurable outcomes, along with many other smaller efforts. Each step along the way contributed something concrete to help make the Internet more secure.

This year’s Crypto Week is a great example of the type of impact an effective hybrid research organization can make. Every day that week, a new announcement was made that helped take research results and realize their practical impact. From the League of Entropy, which is based on fundamental work by researchers at EPFL, to Cloudflare Time Services, which helps address time security issues raised in papers by former Cloudflare intern Aanchal Malhotra, to our own (currently running) post-quantum experiment with Google Chrome, engineers at Cloudflare combined research with building large-scale production systems to help solve some unsolved problems on the Internet.

Open collaboration, open standards, and open source

“We reject kings, presidents and voting. We believe in rough consensus and running code.” - Dave Clark

Effective research requires:

  • Choosing interesting problems to solve
  • Popularizing the ideas discovered while studying the solution space
  • Implementing the ideas at scale to show they’re effective

Cloudflare’s massive popularity puts us in a very privileged position. We can research, implement and deploy experiments at a scale that simply can’t be done by most organizations. This makes Cloudflare an attractive research partner for universities and other research institutions who have domain knowledge but not data. We rely on our own expertise along with that of peers in both academia and industry to decide which problems to tackle in order to achieve common goals and make new scientific progress. Our middlebox detection project, proposed by researchers at the University of Michigan, is an example of such a problem.

We’re not purists who are only interested in pursuing our own ideas. Some interesting problems have already been solved, but the solution isn’t widely known or implemented. In this situation, we contribute our efforts to help elevate the best ideas and make them available to the public in an accessible way. Our early work popularizing elliptic curves on the Internet is such an example.

Popularizing an idea and implementing the idea at scale are two different things. Along with popularizing winning ideas, we want to ensure these ideas stick and provide benefits to Internet users. To promote the widespread deployment of useful ideas, we work on standards and deploy newly emerging standards early on. Doing so helps the industry easily adopt innovations and supports interoperability. For example, the work done for Crypto Week 2019 has helped the development of international technical standards. Aspects of the League of Entropy are now being standardized at the CFRG, Roughtime is now being considered for adoption as an IETF standard, and we are presenting our post-quantum results as part of NIST’s post-quantum cryptography standardization effort.

Open source software is another key aspect of scaling the implementation of an idea. We open source associated code whenever possible. The research team collaborates with the wider research world as well as internally with other teams at Cloudflare.

Focus areas going forward

Doing research, sharing it in an accessible way, working with top experts to validate it, and working on standardization has several benefits. It provides an opportunity to educate the public, further scientific understanding, and improve the state of the art; but it’s also a great way to attract candidates. Great engineers want to work on interesting projects and great researchers want to see their work have an impact. This hybrid research approach is attractive to both types of candidates.

Computer science is a vast arena, so the areas we’re currently focusing on are:

  • Security and privacy
  • Cryptography
  • Internet measurement
  • Low-level networking and operating systems
  • Emerging networking paradigms

We’ve co-authored a number of publications in these areas over the last few years. We’ll be building on this tradition going forward.

And by the way, we’re hiring!

Product Management
Help the research team explore the future of peer-to-peer systems by building and managing projects like the Distributed Web Gateway.

Engineering
Engineering Manager (San Francisco, London)
Systems Engineer - Cryptography Research (San Francisco)
Cryptography Research Engineer Internship (San Francisco, London)

If none of these fit you perfectly, but you still want to reach out, send us an email at: research@cloudflare.com.

08:00

Congratulations! You finally have the 10Mbps you're legally entitled to. Too bad that's obsolete [The Register]

UK.gov policy slammed for not keeping pace with technology

Plans to introduce a legal right for everyone in the UK to have minimum broadband speeds of 10Mbps next year will be "obsolete soon after introduction", a Parliamentary report has found.…

07:20

Flying priests crop-dust Russian citizens with holy water to make them stop boozing and bonking [The Register]

Social afflictions solved by waving an 'inexhaustible chalice' in your face

Orthodox priests in the central Russian city of Tver have been practising an original method of ridding locals of alcohol abuse and fornication: grab some religious relics, jump in a bi-plane, circle overhead and pour holy water onto citizens from the skies while reciting prayers.…

07:00

AMD EPYC 7H12 Announced As New 280 Watt Processor For High Performance Computing [Phoronix]

From Rome, Italy this afternoon, AMD not only announced that more than 100 world records have been broken with their new EPYC "Rome" processors, but also revealed a new SKU! Meet the EPYC 7H12...

06:47

MPs call for 'immediate' stop to facial recog in UK as report underlines bias risks in 'pre-crime' algos used by coppers [The Register]

New report after 12 forces across England and Wales trialled technology

MPs across parties have called for an immediate "stop" to live facial recognition surveillance by the police and in public places.…

06:10

Robot Rin Tin Tin can rescue you from that collapsed mine shaft [The Register]

It might even hand you the fire extinguisher

An autonomous dog-like robot designed to scamper down tunnels on search-and-rescue missions was put to the test in the most recent bout of US military boffins' DARPA Subterranean Challenge.…

05:34

You know SAP's doing a great job when a third of German users say they 'have no confidence in it' [The Register]

Savage

SAP's German customers are using a meeting in Nuremberg to complain the company should be doing more to make its products usable.…

05:03

Linux 5.4 Power Management Updates Sent In But Without AMD CPPC Changes [Phoronix]

The Linux 5.4 power management changes have been submitted for this next version of the Linux kernel...

05:01

Microsoft to improve Azure networking with private links to multi-tenant services [The Register]

Preview of private endpoints accessible both in the cloud and on premises

Microsoft has pulled the sheets off Azure Private Link as a way to create a private endpoint for a shared service.…

04:41

Valve's ACO Shader Compiler Under Review For The Mesa Radeon Vulkan Driver [Phoronix]

The RADV "ACO" shader compiler announced by Valve back in July for the fastest compilation speeds and best possible code generation may soon be hitting mainline Mesa for the open-source AMD Linux graphics stack...

04:27

How to break out of a hypervisor: Abuse Qemu-KVM on-Linux pre-5.3 – or VMware with an AMD driver [The Register]

Pair of bug reports show how VM escapes put servers at risk

A pair of newly disclosed security flaws could allow malicious virtual machine guests to break out of their hypervisor's walled gardens and execute malicious code on the host box.…

03:45

Your ugly mug may be scanned yet again – but at least you'll be able to board faster at Gatwick [The Register]

Brit airport to extend facial recog after easyJet trial

Gatwick Airport will extend its use of facial recognition to match passengers to their passports at departure gates before they board planes.…

03:09

Created to mimic Heroku: Cloud Foundry explained by its chief technology officer [The Register]

The past, present and future of a confusing platform

Interview  The development experience may be easy, but the open-source Cloud Foundry (CF) platform is confusing as hell for newcomers. Chip Childers, CTO of the Cloud Foundry Foundation since it was formed in January 2015, spoke to The Reg about its past, present and future, at the recent Cloud Foundry Summit in The Hague.…

02:40

Linux 5.4 Preps For Intel Tiger Lake, Elkhart Lake & Lightning Mountain + Killing MPX [Phoronix]

The Linux 5.4 x86/cpu changes are as busy as always on the Intel side...

02:08

If Syria pioneered grain processing by watermill in 350BC, the UK in 2019 can do better... right? [The Register]

Wrong: Biz committee bemoans lack of automation strategy

The UK government needs to come up with an actual strategy to help businesses and workers take advantage of automation and robotics.…

00:56

Revealed: The 25 most dangerous software bug types – mem corruption, so hot right now [The Register]

Tired: SQLi. Expired: Format string exploits. Hired: Anyone who can port code from C/C++

On Tuesday, the Common Weakness Enumeration (CWE) team from MITRE, a non-profit focused on information security for government, industry and academia, published its list of the CWE Top 25 Most Dangerous Software Errors.…

00:07

This image-recognition roulette is all fun and games... until it labels you a rape suspect, divorcee, or a racial slur [The Register]

If we could stop teaching AI insults, that would be great

Netizens are merrily slinging selfies and other photos at an online neural network to classify them... and the results aren’t pretty.…

Tuesday, 17 September

23:08

AMD Linux Driver's LRU Bulk Moves Can Be A Big Help For Demanding Linux Games [Phoronix]

Sadly not currently queued as a fix for the Linux 5.4 kernel, re-enabling the LRU bulk moves functionality can deliver a significant boost to Radeon graphics driver performance for Linux gaming...

22:53

You can trust us to run a digital currency – we're Facebook: Exec begs Europe not to ban Libra [The Register]

His persuasive argument? You’re wrong – we know better about this money stuff

The Facebook exec in charge of its Libra cryptocurrency effort has sought to assure European governments that their fears are unfounded… by telling them they’re wrong and Facebook knows better.…

22:09

Improved Fscrypt Sent In For Linux 5.4 To Offer Better Native File Encryption Handling [Phoronix]

In addition to submitting the FS-VERITY file authentication code for Linux 5.4, Google's Eric Biggers has sent out his big update to the fscrypt file encryption framework for this next kernel revision...

20:51

Not to over-hype this storage chip tech, but if I could get away with calling my first-born '3D NAND', I totally would [The Register]

Let's take a quick tour under the hood – and spell out why your biz needs this

Comment  Anyone who’s ever watched the original Star Trek series will probably remember Spock and Kirk playing three-dimensional chess – a great chance for the Enterprise’s science officer to show off his prowess with logic as he contemplated a range of complex moves.…

18:16

NVIDIA Bringing Up Open-Source Volta GPU Support For Their Xavier SoC [Phoronix]

While NVIDIA doesn't contribute much open-source Linux driver code for their desktop GPUs (though they have been ramping up documentation), on the Tegra/embedded side they have contributed improvements and new hardware support to Nouveau and associated driver code over the past several years. NVIDIA's open-source Tegra/embedded contributions come as a result of customer demand/requirements. Their latest work is preparing to finally bring up the "GV11B" Volta graphics found within last year's Tegra Xavier SoC...

15:25

Scott McNealy gets touchy feely with Trump: Sun cofounder hosts hush-hush reelection fundraiser for President [The Register]

Commander-in-Chief jets into Silicon Valley under cloud of secrecy

The mystery host of a Silicon Valley fundraiser for President Trump today has been revealed as Scott McNealy, co-founder and former CEO of Sun Microsystems.…

14:46

Radeon Navi 12/14 Open-Source Driver Support Now Being Marked As "Experimental" [Phoronix]

In an interesting change of course, the open-source driver support for AMD Radeon Navi 12 and Navi 14 GPUs is being flagged as experimental and hidden behind a feature flag...

13:58

US government sues ex-IT guy for breaking his NDA (Yes, we mean Edward Snowden) [The Register]

Uncle Sam tries to plug leaker's pay, ends up plugging leaker's book

The US government today sued former CIA employee and NSA sysadmin contractor Edward Snowden to deny him payment from his newly published book, Permanent Record.…

13:44

We asked for your Fitbit horror stories and, oh wow, did you deliver: Readers sync their teeth into 'junk' gizmos [The Register]

'This is the last Fitbit I will buy'

Yesterday El Reg wrote about the frustrating syncing failures plaguing Fitbit gadgets over the past four or so weeks.…

13:15

Seriously, this sh!t again? 24m medical records, 700m+ scan pics casually left online [The Register]

Whole pile of US data just sitting there with no security

Around 24 million medical patients' data is floating around on the internet, freely available for all to pore over – thanks to that good old common factor, terribly insecure servers.…

12:30

DevOps darling GitLab pockets another $268m to be valued at $2.75bn [The Register]

Drops Enterprise (Core) into VMware Cloud Marketplace, misspells own name

DevOps botherer GitLab has scored another $268m of funding, bringing the value of the outfit to $2.75bn ahead of a 2020 IPO.…

11:45

Vulns out of the box: 12 in 13 small biz network devices terribly insecure by default – research [The Register]

You want root shell access? No problem

A new report has suggested that 12 out of 13 network devices, such as routers and network-attached storage appliances, are vulnerable to hacks that enable "root-privileged access without any authentication".…

11:26

HIPCL Lets CUDA Run On OpenCL+SPIR-V [Phoronix]

Based on AMD's GPUOpen HIP, part of their ROCm stack, researchers at Tampere University in Finland have created HIPCL, which leverages HIP along with POCL to route CUDA code to run on any hardware supporting OpenCL+SPIR-V...

11:04

Apple tells European Commission it's nutty for slapping €13bn tax bill on Irish subsidiary [The Register]

Sweetheart deal crackdown 'defies reality and common sense' apparently

Apple has appealed against the European Union's 2016 decision to impose a €13bn tax bill on the iPhone maker's Irish subsidiary.…

10:06

Mozilla Shifting Firefox To A Four-Week Release Cycle [Phoronix]

Mozilla announced today they are tightening up the Firefox release cycle even more... Expect to see new Firefox releases monthly...

10:00

VMware on AWS: Low-risk option or security blanket for those who don't like change? [The Register]

John Enoch gives us the hard sell – just ignore the price

AWS Transformation Day  It's London's turn with AWS Transformation Day, where attendees endure a cacophony of buzzwords intended to hammer home the message that Amazon's cloud is where you wanna be.…

09:55

How We Design Features for Wrangler, the Cloudflare Workers CLI [The Cloudflare Blog]


The most recent update to Wrangler, version 1.3.1, introduces important new features for developers building Cloudflare Workers — from built-in deployment environments to first class support for Workers KV. Wrangler is Cloudflare’s first officially supported CLI. Branching into this field of software has been a novel experience for us engineers and product folks on the Cloudflare Workers team.

As part of the 1.3.1 release, the folks on the Workers Developer Experience team dove into the thought process that goes into building out features for a CLI and thinking like users. Because while we wish building a CLI were as easy as our teammate Avery tweeted...

… it brings design challenges that many of us have never encountered. To overcome these challenges successfully requires deep empathy for users across the entire team, as well as the ability to address ambiguous questions related to how developers write Workers.

Wrangler, meet Workers KV

Our new KV functionality introduced a host of new features, from creating KV namespaces to bulk uploading key-value pairs for use within a Worker. This new functionality primarily consisted of logic for interacting with the Workers KV API, meaning that the technical work under “the hood” was relatively straightforward. Figuring out how to cleanly represent these new features to Wrangler users, however, became the fundamental question of this release.
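
To give a sense of what sits under that hood, here is a minimal sketch (ours, not Wrangler’s actual code) of the kind of Workers KV REST calls being wrapped; the endpoint paths follow the Cloudflare v4 API, and the account ID, namespace ID, and credentials are placeholders:

# list the KV namespaces on an account
$ curl -H "X-Auth-Email: user@example.com" \
       -H "X-Auth-Key: $CF_API_KEY" \
       "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces"

# write the string "someStringValue" under the key "myKey" in one namespace
$ curl -X PUT \
       -H "X-Auth-Email: user@example.com" \
       -H "X-Auth-Key: $CF_API_KEY" \
       --data "someStringValue" \
       "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/myKey"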

Designing the invocations for new KV functionality unsurprisingly required multiple iterations, and taught us a lot about usability along the way!

Attempt 1

For our initial pass, the path originally seemed so obvious. (Narrator: It really, really wasn’t). We hypothesized that having Wrangler support familiar commands — like ls and rm — would be a reasonable mapping of existing command line tools to Workers KV, and ended up with the set of invocations below:

# creates a new KV Namespace
$ wrangler kv add myNamespace

# sets a string key that doesn't expire
$ wrangler kv set myKey="someStringValue"

# sets many keys
$ wrangler kv set myKey="someStringValue" myKey2="someStringValue2" ...

# sets a volatile (expiring) key that expires in 60 s
$ wrangler kv set myVolatileKey=path/to/value --ttl 60s

# deletes three keys
$ wrangler kv rm myNamespace myKey1 myKey2 myKey3

# lists all your namespaces
$ wrangler kv ls

# lists all the keys for a namespace
$ wrangler kv ls myNamespace

# removes all keys from a namespace, then removes the namespace
$ wrangler kv rm -r myNamespace

While these commands invoked familiar shell utilities, they made interacting with your KV namespace a lot more like interacting with a filesystem than a key value store. The juxtaposition of a well-known command like ls with a non-command, set, was confusing. Additionally, preexisting command line tools did not map 1-1 onto KV actions (especially rm -r; there is no need to recursively delete a KV namespace like a directory if you can just delete the namespace!)

This draft also surfaced use cases we needed to support: namely, easy bulk uploads from a file. Requiring users to enter every KV pair on the command line instead of reading from a file of key-value pairs was a non-starter.

Finally, these KV subcommands caused confusion about which actions applied to which resources. For example, the command for listing your Workers KV namespaces looked a lot like the command for listing keys within a namespace.

Going forward, we needed to meet these newly identified needs.

Attempt 2

Our next attempt shed the shell utilities in favor of simple, declarative subcommands like create, list, and delete. It also addressed the need for easy-to-use bulk uploads by allowing users to pass a JSON file of keys and values to Wrangler.

# create a namespace
$ wrangler kv create namespace <title>

# delete a namespace
$ wrangler kv delete namespace <namespace-id>

# list namespaces
$ wrangler kv list namespace

# write key-value pairs to a namespace, with an optional expiration flag
$ wrangler kv write key <namespace-id> <key> <value> --ttl 60s

# delete a key from a namespace
$ wrangler kv delete key <namespace-id> <key>

# list all keys in a namespace
$ wrangler kv list key <namespace-id>

# write bulk KV pairs from a JSON file or a directory; for a directory, keys
# will be the file paths from its root and values will be the file contents
$ wrangler kv write bulk ./path/to/assets

# delete bulk pairs; same input functionality as above
$ wrangler kv delete bulk ./path/to/assets
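
For illustration, the JSON file handed to these bulk commands would just be an array of key-value objects; the field names below mirror the Workers KV bulk API and are our assumption rather than a spec:

# a hypothetical ./path/to/data.json; expiration_ttl (seconds) is optional
[
  { "key": "greeting", "value": "hello world" },
  { "key": "session:1234", "value": "active", "expiration_ttl": 60 }
]

# upload (or delete) the whole set in one command
$ wrangler kv write bulk ./path/to/data.json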

Given the breadth of new functionality we planned to introduce, we also built out a taxonomy of new subcommands to ensure that invocations for different resources — namespaces, keys, and bulk sets of key-value pairs — were consistent:

[Diagram: taxonomy of the new kv subcommands, grouping actions across namespaces, keys, and bulk sets of key-value pairs]

Designing invocations with taxonomies became a crucial part of our development process going forward, and gave us a clear look at the “big picture” of our new KV features.

This approach was closer to what we wanted. It offered bulk put and bulk delete operations that would read multiple key-value pairs from a JSON file. After specifying an action subcommand (e.g. delete), users now explicitly stated which resource the action applied to (namespace, key, bulk), which reduced confusion about which action applied to which KV component.

This draft, however, was still not as explicit as we wanted it to be. The distinction between operations on namespaces versus keys was not as obvious as we wanted, and we still feared the possibility of different delete operations accidentally producing unwanted deletes (a possibly disastrous outcome!)

Attempt 3


We really wanted to help differentiate where in the hierarchy of structs a user was operating at any given time. Were they operating on namespaces, keys, or bulk sets of keys in a given operation, and how could we make that as clear as possible? We looked around, comparing the ways CLIs from kubectl to Heroku’s handled commands affecting different objects. We landed on a pleasing pattern inspired by Heroku’s CLI: colon-delimited command namespacing:

plugins:install PLUGIN    # installs a plugin into the CLI
plugins:link [PATH]       # links a local plugin to the CLI for development
plugins:uninstall PLUGIN  # uninstalls or unlinks a plugin
plugins:update            # updates installed plugins

So we adopted kv:namespace, kv:key, and kv:bulk to semantically separate our commands:

# namespace commands operate on namespaces
$ wrangler kv:namespace create <title> [--env]
$ wrangler kv:namespace delete <binding> [--env]
$ wrangler kv:namespace rename <binding> <new-title> [--env]
$ wrangler kv:namespace list [--env]
# key commands operate on individual keys
$ wrangler kv:key write <binding> <key>=<value> [--env | --ttl | --exp]
$ wrangler kv:key delete <binding> <key> [--env]
$ wrangler kv:key list <binding> [--env]
# bulk commands take a user-generated JSON file as an argument
$ wrangler kv:bulk write <binding> ./path/to/data.json [--env]
$ wrangler kv:bulk delete <binding> ./path/to/data.json [--env]

And ultimately ended up with this topology:

[Diagram: final topology of the kv:namespace, kv:key, and kv:bulk subcommands]

We were even closer to our desired usage pattern; the object acted upon was explicit to users, and the action applied to the object was also clear.

There was one usage issue left. Supplying namespace-ids (the field that specifies which Workers KV namespace an action applies to) required users to dig up their clunky KV namespace-id (a string like 06779da6940b431db6e566b4846d64db) and provide it in the command line under the namespace-id option. This namespace-id value is what our Workers KV API expects in requests, but it would be cumbersome for users to find and provide, let alone use frequently.

The solution we came to takes advantage of the wrangler.toml present in every Wrangler-generated Worker. To publish a Worker that uses a Workers KV store, the following field is needed in the Worker’s wrangler.toml:

kv-namespaces = [
	{ binding = "TEST_NAMESPACE", id = "06779da6940b431db6e566b4846d64db" }
]

This field specifies a Workers KV namespace that is bound to the name TEST_NAMESPACE, such that a Worker script can access it with logic like:

TEST_NAMESPACE.get("my_key");
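
Expanded into a minimal, hypothetical Worker, the binding is used like a global object; get() resolves to the stored string, or null when the key is absent:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // TEST_NAMESPACE is the KV binding declared in wrangler.toml above
  const value = await TEST_NAMESPACE.get('my_key');
  if (value === null) {
    return new Response('key not found', { status: 404 });
  }
  return new Response(value);
}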

We also decided to take advantage of this wrangler.toml field to allow users to specify a KV binding name instead of a KV namespace id. Upon providing a KV binding name, Wrangler could look up the associated id in wrangler.toml and use that for Workers KV API calls.

Wrangler users performing actions on KV namespaces could simply provide --binding TEST_NAMESPACE for their KV calls and let Wrangler retrieve its ID from wrangler.toml. Users can still specify --namespace-id directly if they do not have namespaces specified in their wrangler.toml.
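
Concretely, either invocation below writes the same key; the flag spellings follow the prose here, and the exact released syntax may differ:

# resolve the namespace ID through the binding name in wrangler.toml
$ wrangler kv:key write --binding TEST_NAMESPACE my_key="some value"

# or supply the raw namespace ID directly
$ wrangler kv:key write --namespace-id 06779da6940b431db6e566b4846d64db my_key="some value"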

Finally, we reached our happy point: Wrangler’s new KV subcommands were explicit, offered functionality for both individual and bulk actions with Workers KV, and felt ergonomic for Wrangler users to integrate into their day-to-day operations.

Lessons Learned

Throughout this design process, we identified the following takeaways to carry into future Wrangler work:

  1. Taxonomies of your CLI’s subcommands and invocations are a great way to ensure consistency and clarity. CLI users tend to anticipate similar semantics and workflows within a CLI, so visually documenting all paths for the CLI can greatly help with identifying where new work can be consistent with older semantics. Drawing out these taxonomies can also expose missing features that seem like a fundamental part of the “big picture” of a CLI’s functionality.
  2. Use other CLIs for inspiration and sanity checking. Drawing logic from popular CLIs helped us confirm our assumptions about what users like, and learn established patterns for complex CLI invocations.
  3. Avoid logic that requires passing in raw ID strings. Testing CLIs a lot means that remembering and re-pasting ID values gets very tedious very quickly. Emphasizing a set of purely human-readable CLI commands and arguments makes for a far more intuitive experience. When possible, taking advantage of configuration files (like we did with wrangler.toml) offers a straightforward way to provide mappings of human-readable names to complex IDs.

We’re excited to continue using these design principles we’ve learned and documented as we grow Wrangler into a one-stop Cloudflare Workers shop.

If you’d like to try out Wrangler, check it out on GitHub and let us know what you think! We would love your feedback.


09:25

HP printer small print says kit phones home data on whatever you print – and then some [The Register]

Security engineer actually reads privacy policy to his horror

Hewlett-Packard Inc's printers don't just slurp the contents of your wallet at a frightening rate. They also guzzle a surprising amount of data on you and whatever you're printing.…

09:02

AMD EPYC 7302 / 7402 / 7502 / 7742 Linux Performance Benchmarks [Phoronix]

Last month we provided launch-day benchmarks of the AMD EPYC 7502 and 7742 under Linux in both 1P and 2P configurations for these exciting "Rome" Zen 2 server processors. For your viewing pleasure today is a fresh look at the EPYC 7502 and 7742 processors under the latest Linux 5.3 kernel, expanded to also cover the EPYC 7302 and EPYC 7402 processors, which AMD recently sent over. Under Ubuntu 19.04 with Linux 5.3, these four different AMD EPYC 7002 series SKUs were benchmarked along with some of the older AMD Naples processors and Intel Xeon Gold/Platinum processors for an up-to-date look at Linux server performance.

08:57

CentOS 7.7 Released As The Last Stop Before CentOS 8.0 [Phoronix]

CentOS 8.0 is coming next week as the long-awaited community rebuild of Red Hat Enterprise Linux 8.0. But for those currently maintaining CentOS 7 / EL7, CentOS 7.7 is out today...

08:50

.NET Core 3.0 thought it was all ready for release. And it would have been too, if it weren't for those pesky Visual Studio kids [The Register]

Hi, remember us? We share a toolset. And have another preview to do?

Having promised there wouldn't be any more previews, Microsoft has dropped a release candidate for the upcoming .NET Core 3.0 framework.…

08:28

Fedora 31 Beta Released With GNOME 3.34, Guts i686 Hardware Support [Phoronix]

Fedora 31 Beta has been released on time! It's not only on time but also comes with many exciting updates...

08:19

The 32-Bit Packages That Will Continue To Be Supported Through Ubuntu 20.04 LTS [Phoronix]

Earlier this year Canonical announced they would be pulling 32-bit support from Ubuntu ahead of next year's 20.04 LTS. But following public backlash, they stepped back to provide 32-bit support for select packages. Today they announced the 199 32-bit packages that will continue to be supported through Ubuntu 20.04 LTS...

08:16

Brit government WLTM one Chief Digi Info Officer [The Register]

Required: GSoH, plus ability to make ends meet on up to £180k a year

UK.gov is on the lookout for a Government Chief Digital Information Officer (GCDIO) – a permanent secretary role that sets the strategic direction of travel for public sector IT in return for up to £180,000 a year.…

07:47

Announcing the release of Fedora 31 Beta [Fedora Magazine]

The Fedora Project is pleased to announce the immediate availability of Fedora 31 Beta, the next step towards our planned Fedora 31 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Beta Release Highlights

GNOME 3.34 (almost)

The newest release of the GNOME desktop environment is full of performance enhancements and improvements. The beta ships with a prerelease, and the full 3.34 release will be available as an update. For a full list of GNOME 3.34 highlights, see the release notes.

Fedora IoT Edition

Fedora Editions address specific use-cases the Fedora Council has identified as significant in growing our userbase and community. We have Workstation, Server, and CoreOS — and now we’re adding Fedora IoT. This will be available from the main “Get Fedora” site when the final release of F31 is ready, but for now, get it from iot.fedoraproject.org.

Read more about Fedora IoT in our Getting Started docs.

Fedora CoreOS

Fedora CoreOS remains in a preview state, with a generally-available release planned for early next year. CoreOS is a rolling release which rebases periodically to a new underlying Fedora OS version. Right now, that version is Fedora 30, but soon there will be a “next” stream which will track Fedora 31 until that’s ready to become the “stable” stream.

Other updates

Fedora 31 Beta includes updated versions of many popular packages like Node.js, the Go language, Python, and Perl. We also have the customary updates to underlying infrastructure software, like the GNU C Library and the RPM package manager. For a full list, see the Change set on the Fedora Wiki.

Farewell to bootable i686

We’re no longer producing full media or repositories for 32-bit Intel-architecture systems. We recognize that this means newer Fedora releases will no longer work on some older hardware, but the fact is there just hasn’t been enough contributor interest in maintaining i686, and we can provide greater benefit for the majority of our users by focusing on modern architectures. (The majority of Fedora systems have been 64-bit x86_64 since 2013, and at this point that’s the vast majority.)

Please note that we’re still making userspace packages for compatibility when running 32-bit software on 64-bit systems — we don’t see the need for that going away anytime soon.

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode. As testing progresses, common issues are tracked on the Common F31 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.

More information

For more detailed information about what’s new in the Fedora 31 Beta release, consult the Fedora 31 Change set. It contains more technical information about the new packages and improvements shipped with this release.

07:30

UK Home Office web form snafu allows you to both agree and disagree – strongly – all at once [The Register]

Government cares what you think. Honest

A UK Home Office consultation on new, intrusive police powers was so incompetently written that you could both "strongly agree" and "strongly disagree" at the same time when answering its questions.…

07:00

Neural networks. Sparse data. TensorFlow. PyTorch. Text mining. Ethics – and lots more. We've got every angle of AI covered at MCubed [The Register]

Join us and our awesome speakers for a hearty no-hype pure-tech deep dive

Event  Whether you’re worried about the machines taking over, or think it can’t happen soon enough, you should get yourselves down to MCubed at the end of the month.…

06:50

Phoronix Test Suite 9.0 Released With New Result Viewer, Offline/Enterprise Benchmarking Enhancements [Phoronix]

Phoronix Test Suite 9.0 is now available as the latest quarterly feature release to our cross-platform, open-source automated benchmarking framework. With Phoronix Test Suite 9.0 comes a rewritten result viewer to offer more result viewing functionality previously only exposed locally via the command-line or through a Phoromatic Server (or OpenBenchmarking.org when results are uploaded), new offline/enterprise usage improvements, various hardware/software detection enhancements on different platforms, and a variety of other additions.

06:33

NASA's lunar spy looks for hide-and-seek champ Vikram, Starliner test success, and more [The Register]

Happy 43rd birthday to Space Shuttle Enterprise

Roundup  Unlike SpaceX's Crew Dragon, which plops down in the ocean at the end of a mission (ideally in one piece), Boeing's CST-100 Starliner is designed to land on, er, land. As NASA and Boeing inch ever closer to its first crewed launch, rehearsals were conducted last week to practice locating a capsule, safing it and preparing for hatch opening.…

06:15

Saturday Morning Breakfast Cereal - Duuude [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I need this concept to be better-known because it has AT LEAST as much humor potential as that whole thing about the cat in the box.


Today's News:

05:45

Google age discrimination case: Supervisor called me 'grandpa', engineer claims [The Register]

Suit filed alleging HR failed to protect staffer from harassment

Google has been hit by another age discrimination lawsuit, just two months after the search giant settled a previous case brought by over 200 people.…

05:00

UK.gov's smart meter cost-benefit analysis for 2019 goes big on cost, easy on the benefits [The Register]

Did someone mention a delay? Rollout given another 4 years as price tag soars to £13.4bn

The UK government has confirmed that electricity suppliers have an extra four years to hit targets for installing smart meters.…

04:23

Microsoft Makes Their C++ Standard Library Open-Source (STL) [Phoronix]

Microsoft has begun their next open-source expedition by open-sourcing an important piece of MSVC / Visual Studio... STL, their C++ standard library...

04:04

You look like a fungi. Got mushroom in your life to build stuff with mycelium computers? [The Register]

IRL Star Trek: Discovery, sort of

The Unconventional Computing Laboratory in Bristol is looking for a research associate to help it create buildings with embedded fungus-created computers.…

03:47

Richard Stallman Resigns From The Free Software Foundation [Phoronix]

Richard M Stallman has resigned as president from the Free Software Foundation and from his Board of Directors post...

03:39

Linux 5.4 Continues Sound Open Firmware, Improvements For AMD/NVIDIA HDMI Audio [Phoronix]

Linux 5.4 will sound better. Well, at least provide audio support on more hardware with this next kernel release thanks to the latest batch of open-source sound improvements...

03:05

Disney signs on with Microsoft, SQLCMD arrives in Data Studio and Azure goes German [The Register]

Heigh-ho, Heigh-ho, it's off to test we go...

Roundup  While the speculation machine for Microsoft's mystery hardware event ramped up (although still a mere ripple compared to the spurtings around anything to do with Apple), the Redmond gang continued to toil. Here are some of the stories you might have missed.…

02:03

First they came for 'face' and I did not speak out because I... have no face? Then they came for 'book' [The Register]

Don't panic: Off and f*ck still free from Zuckerberg, for now

Facebook has applied to trademark the word "book" in Europe.…

01:42

GhostBSD 19.09 Provides A Good BSD Desktop Built Off TrueOS & FreeBSD 12 [Phoronix]

TrueOS’s change of direction back in 2018 was a disappointment, as it did away with the desktop version that had been around for years, dating back to when the project was known as PC-BSD. But at least a few viable alternatives continue advancing toward a nice out-of-the-box BSD desktop experience, such as GhostBSD and MidnightBSD...

01:30

Is it time to update your data warehouse and retool your analytics? Google Cloud's gurus are here to guide you [The Register]

Get the answers you need this month – and ready your systems for the 2020s

Promo  If you are beginning to wonder whether your familiar old data warehouse and analytics solutions can keep pace with the fast-moving modern world, you should check out today's state-of-the-art data-handling and analytics systems.…

00:58

Boffins build AI that can detect cyber-abuse – and if you don't believe us, YOU CAN *%**#* *&**%* #** OFF [The Register]

Alternatively, you can try to overpower it with your incredibly amazing sarcasm

Trolls, morons, and bots plaster toxic crap all over Twitter and other antisocial networks. Can machine learning help clean it up?…

Monday, 16 September

23:33

Linux 5.4 Adds Qualcomm Snapdragon 855, Supports Some Newer ARM Laptops [Phoronix]

The ARM SoC platform and driver changes landed on Monday during the first full day of the Linux 5.4 merge window. There are some exciting ARM hardware support improvements in this kernel, while some older platforms are being done away with...

23:11

Stallman's final interview as FSF president: Last week we quizzed him over Microsoft visit. Now he quits top roles amid rape remarks outcry [The Register]

GNU man resigns after Minsky email defense 'the final straw' for dev world

Interview  Shortly after The Register learned that Richard Stallman, founder and then president of the Free Software Foundation and creator of the GNU Project, had been invited to speak at Microsoft's corporate headquarters, we emailed him to ask about the apparent incongruity of advocating for software freedom at a company singled out by the FSF as a maker of malware.…

22:08

Linux 5.4 Dropping Support For The Itanium IA64-Powered SGI Altix [Phoronix]

While Linux 5.4 is bringing a new driver to help SGI systems back to their Origin boxes, this kernel is meanwhile dropping support for the SGI Altix, which is newer than some of the Origin systems. The removal of the SGI Altix from the Linux kernel is the latest step in winding down Itanium (IA64) support...

19:13

Larry Ellison tiers Amazon a new one: Oracle cloud gets 'always' free offer, plus something about Linux [The Register]

El Reg decodes Big Red's big announcements from today

OpenWorld  Oracle on Monday debuted a free, self-fixing Linux distribution for paying Oracle Cloud customers, and a free Cloud service tier that includes a limited version of its paid Autonomous Database, for winning developer favor and fostering future Cloud customers.…

18:36

IBM looks to boost sales the same way it has for 65 years – yes, it's a new mainframe: The z15 [The Register]

Lineup looks to put a pep in the step of flailing systems group

IBM this month officially unveiled the newest addition to its Z-series mainframe lineup in roughly two years.…

16:35

RHEL8-Based CentOS 8.0 Slated To Be Released Next Week [Phoronix]

It looks like CentOS 8.0, the community and cost-free re-spin of Red Hat Enterprise Linux 8.0, will finally ship next week...

15:49

Linux 5.4 Scheduler Changes Bring Better AMD EPYC Load Balancing, Other Optimizations [Phoronix]

The Linux 5.4 scheduler changes are fairly exciting on multiple fronts...

15:04

PulseAudio 13.0 Released With Dolby TrueHD and DTS-HD Master Audio Support [Phoronix]

While PipeWire may be seeing a lot of investment by Red Hat for improving audio/video streams on Linux, PulseAudio isn't letting up yet as the de facto Linux desktop sound server. Quietly released last week was PulseAudio 13.0 as the newest feature update and their first big update in some fifteen months...

12:46

A Look At The Speedy Clear Linux Boot Time Versus Ubuntu 19.10 [Phoronix]

Given the interest last week in how Clear Linux dropped their kernel boot time from 3 seconds to 300 ms, here are some fresh boot time benchmarks of Clear Linux compared to Ubuntu 19.10 on both Intel and AMD hardware...

10:24

Vulkan 1.1.123 Released With Two New Extensions [Phoronix]

Vulkan 1.1.123 is the latest weekly update to this high performance graphics API and it's formally introducing two more extensions...

08:00

Saturday Morning Breakfast Cereal - Vampire [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Come to think of it, probably any mythological creature has some legitimate grievances with humanity.


Today's News:

02:00

Copying large files with Rsync, and some misconceptions [Fedora Magazine]

There is a notion that a lot of people working in the IT industry often copy and paste from internet howtos. We all do it, and the copy-and-paste itself is not a problem. The problem is when we run things without understanding them.

Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A, but became 100GB on site B.

The friend believed that rsync is a magic tool that should just “sync” the file as it is. However, what most of us forget is to understand what rsync really is, how it is used, and, most importantly in my opinion, where it comes from. This article provides some further information about rsync, and an explanation of what happened in that story.

About rsync

rsync is a tool created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:

Imagine you have two files, file_A and file_B. You wish to update file_B to be the same as file_A. The obvious method is to copy file_A onto file_B.

Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If file_A is large, copying it onto file_B will be slow, and sometimes not even possible. To make it more efficient, you could compress file_A before sending it, but that would usually only gain a factor of 2 to 4.

Now assume that file_A and file_B are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between file_A and file_B down the link and then use that list of differences to reconstruct the file on the remote end.

The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus they require that both files are available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you have copied the file over, you no longer need the differences.) This is the problem that rsync addresses.

The rsync algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.

The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.

Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.

The rsync algorithm addresses this problem in a lovely way, as we all might know.

After this introduction to rsync, back to the story!

Problem 1: Thin provisioning

There were two things that would help the friend understand what was going on.

The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system — a method of optimizing the efficiency of available space in Storage Area Networks (SAN) or Network Attached Storage (NAS).

The source file was only 10GB because TP was enabled, and when it was transferred over using rsync without any additional configuration, the destination received the full 100GB. rsync could not do the magic automatically; it had to be configured.

The flag that does this work is -S or --sparse, and it tells rsync to handle sparse files efficiently. And it will do what it says! It will only send the actual data, so both source and destination will end up with a 10GB file.
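
To see the effect for yourself, here is a quick demonstration (the file name and sizes are hypothetical):

# Create a 100GB file that is almost entirely holes, and compare its
# apparent size with the blocks actually allocated on disk
truncate -s 100G template.vmdk
du -h --apparent-size template.vmdk   # reports 100G
du -h template.vmdk                   # reports ~0, since no data blocks are allocated

# With -S, rsync recreates the holes on the destination instead of
# writing out 100GB of zeros
rsync -avS template.vmdk syncuser@host1:/destination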

Problem 2: Updating files

The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred, even when only a single configuration file was changed on that virtual disk. In other words, only a small portion of the file changed.

The command used for this transfer was:

rsync -avS vmdk_file syncuser@host1:/destination

Again, understanding how rsync works would help with this problem as well.

The above is the biggest misconception about rsync. Many of us think rsync will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of rsync.

As the man page says, the default behaviour of rsync is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.

To change this default behaviour of rsync, you have to set the following flags and then rsync will send only the deltas:

--inplace               update destination files in-place
--partial               keep partially transferred files
--append                append data onto shorter files
--progress              show progress during transfer

So the full command that would do exactly what the friend wanted is:

rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination

Note that the sparse flag -S had to be removed, for two reasons. The first is that you cannot use --sparse and --inplace together when sending a file over the wire. The second is that once you have sent a file over with --sparse, you can’t update it with --inplace anymore. Note that versions of rsync older than 3.1.3 will reject the combination of --sparse and --inplace.

So even though the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates copied only the differences, making the copies extremely efficient.
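
If you want to confirm what actually crossed the wire, add the --stats flag. The numbers below are illustrative, not taken from the original transfer:

rsync -av --partial --inplace --append --progress --stats vmdk_file syncuser@host1:/destination

# Illustrative excerpt of the --stats output after a small change inside the file:
# Literal data: 4,194,304 bytes        (only the changed blocks were sent)
# Matched data: 10,733,223,936 bytes   (reconstructed from the existing destination file)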

Sunday, 15 September

11:15

Saturday Morning Breakfast Cereal - Soap Opera [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Sorry for the slow update. I use Viasat internet, so I'm currently updating from the parking lot of a Starbucks.


Today's News:

Saturday, 14 September

09:55

Saturday Morning Breakfast Cereal - Note [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The version of this that originally went out on patreon had the speech bubble tail point at the wrong person and I AM VERY SORRY.


Today's News:

Friday, 13 September

17:00

How Cloudflare and Wall Street Are Helping Encrypt the Internet Today [The Cloudflare Blog]


Today has been a big day for Cloudflare, as we became a public company on the New York Stock Exchange (NYSE: NET). To mark the occasion, we decided to bring our favorite entropy machines to the floor of the NYSE. Footage of these lava lamps is being used as an additional seed to our entropy-generation system LavaRand — bolstering Internet encryption for over 20 million Internet properties worldwide.

(This is mostly for fun. But when’s the last time you saw a lava lamp on the trading floor of the New York Stock Exchange?)


A little context: generating truly random numbers using computers is impossible, because code is inherently deterministic (i.e. predictable). To compensate for this, engineers draw from pools of randomness created by entropy generators, which is a fancy term for "things that are truly unpredictable".

It turns out that lava lamps are fantastic sources of entropy, as was first shown by Silicon Graphics in the 1990s. It’s a torch we’ve been proud to carry forward: today, Cloudflare uses lava lamps to generate entropy that helps make millions of Internet properties more secure.


Housed in our San Francisco headquarters is a wall filled with dozens of lava lamps, undulating with mesmerizing randomness. We capture these lava lamps on video via a camera mounted across the room, and feed the resulting footage into an algorithm — called LavaRand — that amplifies the pure randomness of these lava lamps to dizzying extremes (computers can't create seeds of pure randomness, but they can massively amplify them).
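
As a toy illustration of the principle (emphatically not Cloudflare's implementation; the webcam device path and tools are assumptions), you could hash a single unpredictable camera frame and mix the digest into the kernel's randomness pool:

# Grab one raw frame from a webcam (assumes a V4L2 device at /dev/video0),
# hash it, and write the digest to /dev/urandom, which mixes it into the
# kernel's pool (writing does not credit entropy, it only stirs the pool)
ffmpeg -f v4l2 -i /dev/video0 -frames:v 1 -f rawvideo - 2>/dev/null \
  | sha256sum | cut -d ' ' -f 1 > frame.seed
cat frame.seed > /dev/urandom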

Shortly before we rang the opening bell this morning, we recorded footage of our lava lamps in operation on the trading room floor of the New York Stock Exchange, and we're ingesting the footage into our LavaRand system. The resulting entropy is mixed with the myriad additional sources of entropy that we leverage every day, creating a cryptographically-secure source of randomness — fortified by Wall Street.


We recently took our enthusiasm for randomness a step further by facilitating the League of Entropy, a consortium of global organizations and individual contributors, generating verifiable randomness via a globally distributed network. As one of the founding members of the League, LavaRand (pictured above) plays a key role in empowering developers worldwide with a pool of randomness with extreme entropy and high reliability.

And today, she’s enjoying the view from the podium!


One caveat: the lava lamps we run in our San Francisco headquarters are recorded in real time, 24/7, giving us an ongoing stream of entropy. For understandable reasons, the NYSE doesn't allow live video feeds from the exchange floor while it is in operation. But this morning they did let us record footage of the lava lamps operating shortly before the opening bell, and we're ingesting that video into our LavaRand system (alongside many other entropy generators, including the lava lamps back in San Francisco).


07:51

Saturday Morning Breakfast Cereal - Skeptical [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Yes, Virginia, the Keebler Elves are real, as long as we hunger in our hearts for inexpensive baked goods.


Today's News:

07:00

A Letter from Matthew Prince and Michelle Zatlyn [The Cloudflare Blog]

Cloudflare's three co-founders: Michelle Zatlyn, Lee Holloway, and Matthew Prince

To our potential shareholders:

Cloudflare launched on September 27, 2010. Many great startups pivot over time. We have not. We had a plan and have been purposeful in executing it since our earliest days. While we are still in its early innings, that plan remains clear: we are helping to build a better Internet. Understanding the path we’ve taken to date will help you understand how we plan to operate going forward, and to determine whether Cloudflare is the right investment for you.

Cloudflare was formed to take advantage of a paradigm shift: the world was moving from on-premise hardware and software that you buy to services in the cloud that you rent. Paradigm shifts in technology always create significant opportunities, and we built Cloudflare to take advantage of the opportunities that arose as the world shifted to the cloud.

As we watched packaged software turn into SaaS applications, and physical servers migrate to instances in the public cloud, it was clear that it was only a matter of time before the same happened to network appliances. Firewalls, network optimizers, load balancers, and the myriad of other hardware appliances that previously provided security, performance, and reliability would inevitably turn into cloud services.

Network Control as a Service

We built Cloudflare to provide the suite of cloud services we anticipated customers would demand as they looked to replace their on-premise, hardware-based network appliances. That was an audacious goal and it shaped both our business model and our technical architecture in ways that we believe differentiate us and provide us with a significant competitive advantage.

For example, since we were competing with hardware manufacturers, usage-based billing never made sense for our core products. In the on-premise hardware world, when you suffered more cyber attacks you didn’t pay your firewall vendor more, and when you suffered fewer you didn’t pay them less. If we were going to build a firewall-as-a-service — or any other network appliance replacement — we needed predictable, subscription-based pricing that reflected how companies wished they could pay for their hardware.

We also knew that more data gave us an advantage no hardware appliance could match. Like an Internet-wide immune system, we could learn from all the bits of traffic that flowed through our network. We could learn not only about bad actors and how to stop their attacks, but also about good actors and how to optimize their online experiences. Since more data helped us build better products for all our customers, we never wanted to do anything to discourage any potential customer from routing any amount of traffic, large or small, through our network.

Efficiency is in Our DNA

This core tenet of serving the entire Internet forced us to obsess over costs. Efficiency is in the DNA of Cloudflare because it had to be. Being entrusted with investors’ capital is a privilege and we make investments in our business always with a mind toward being good stewards of that capital. Moreover, while it was tempting to just pass along costs like bandwidth to our customers, we knew if we were going to provide a compelling value proposition against hardware we needed to be ruthlessly efficient.

To achieve the level of efficiency needed to compete with hardware appliances required us to invent a new type of platform. That platform needed to be built on commodity hardware. It needed to be architected so any server in any city that made up Cloudflare’s network could run every one of our services. It also needed the flexibility to move traffic around to serve our highest paying customers from the most performant locations while serving customers who paid us less, or even nothing at all, from wherever there was excess capacity.

We built Cloudflare’s platform from the ground up with a full understanding of our audacious plan: to literally help build a better Internet. We didn’t run separate networks to provide our different products. We didn’t use expensive, proprietary hardware. We didn’t start with one product and then attempt to Frankenstein on others over time. Our platform was purpose-built to efficiently deliver security, performance, and reliability to customers of every size from day one. And our platform has allowed us a level of efficiency to achieve the gross margins of leading hardware appliance vendors — 77% in the first half of this year — but with the greater predictability of a SaaS business model.

Our Platform Approach

For some it may be challenging to categorize our business because our platform includes an incredibly diverse set of capabilities. We provide security products like firewall and access management, performance products like intelligent routing, and reliability products like vendor-neutral load balancing — all as a service, without customers needing to install hardware or change their code.

We also have functions that play supporting roles to the products we sell. For example, we built one of the fastest, most reliable content delivery networks not because we were targeting the CDN market, but because we knew caching was a necessary function in order to efficiently deliver our core products. We built the world’s fastest authoritative domain name services, not to sell DNS, but to deliver service levels we knew our customers needed.

We provide features like CDN and DNS for free to all of our customers. We will continue to implement this strategy: onboarding more customers onto our platform and capturing value from our highly differentiated products which, once a customer is using any part of Cloudflare’s platform, are only a click away.

Potential investors who are new to Cloudflare sometimes ask questions like: “What will you do if CDN bandwidth prices continue to fall?” We remind them we’ve given CDN away for free since Cloudflare launched in 2010, not because we were trying to disrupt the CDN space, but because the much more valuable products we provide our customers need a highly optimized global caching network to perform up to our standards.

We Create More Value Than We Capture

But there is another reason for taking the approach that we do. Cloudflare has always put our customers first and prioritized creating much more value than we capture. We work to get customers onto our platform because, once on board, we know we will be able to solve so many of their problems over time. We aim to make the combined value of the products on our platform significantly more than customers can get from any combination of point solutions.

In the past, to deliver Internet security, performance, and reliability not only required an organization to buy rooms full of expensive network appliances but also to hire IT teams to manage them. While there were some companies that could afford this, the cost was prohibitive for many. Instead of serving only those that could have paid the most, we intentionally made the decision to start by focusing on organizations and individual developers that had previously been underserved. We made our products not only affordable, but easy to use.

And we didn’t stop there. We have continued to improve with every bit of traffic we have seen. In doing so, we have moved up market to the point that, today, approximately 10 percent of the Fortune 1,000 are paying Cloudflare customers. We think one of the best ways to measure the value we deliver is our Net Promoter Score of 68 among paying customers, rivaling some of the best consumer brands in the world. Not only are we obsessed with our customers, but our customers are obsessed with us.

We Are Focused on Consistent Growth Over the Long Term

One of the characteristics of the world’s greatest SaaS companies is that they typically enter a market in some small way and then use that toehold to expand their relationship and move up market. We learned from the great SaaS companies that came before us. This strategy has resulted in consistent, long-term — rather than explosive — growth. Contrast this with companies that only build a better mousetrap. They initially experience heady growth shifting defined spend from one product to another, but the challenge they then face is existential: what’s their second, third, and fourth act? Cloudflare doesn’t have this problem.

We have already begun authoring our next chapters. For example, Cloudflare Workers — the productized version of the serverless architecture we developed for ourselves — is today adopted by more than 20 percent of our new customers. Cloudflare Workers allows our developer customers to write code in the languages they know — C, C++, JavaScript, Rust, Go — and deploy it to the edge of our network, allowing anyone to create new applications with security, performance, and reliability previously reserved to the Internet giants. Cloudflare Workers, and other second-act products like it, continue to expand the types of problems we solve for our customers and the total addressable market we serve.

We will continue to invest in R&D so long as it demonstrates a significant return. Our investment philosophy is oriented around making many small, inexpensive bets — quickly killing the ones that don’t work, and increasing investment in the ones that do. While we will consider M&A when opportunities present themselves, our bias is toward internal development tightly integrated into our efficient platform. We aim to build a massive business — slowly and consistently.

Project Holloway

Finally, there are two of us signing this letter today, but three people started Cloudflare. Lee Holloway is our third co-founder and the genius who architected our platform and recruited and led our early technical team. Tragically, Lee stepped down from Cloudflare in 2015, suffering the debilitating effects of Frontotemporal Dementia, a rare neurological disease.

As we began the confidential process to go public, one of the early decisions was to pick the code name for our IPO. We chose “Project Holloway” to honor Lee’s contribution. More importantly, on a daily basis, the technical decisions Lee made, and the engineering team he built, are fundamental to the business we have become.

It has indeed been an incredible journey to have built Cloudflare into what it is today. We are grateful to our customers for their business and trust, to our team members for their dedication to our mission, and to our shareholders, and potential shareholders, for their support and encouragement.

And we’re just getting started.

Matthew Prince                     Michelle Zatlyn  
Co-founder & CEO                Co-founder & COO

02:00

GNOME 3.34 released — coming soon in Fedora 31 [Fedora Magazine]

Today the GNOME project announced the release of GNOME 3.34. This latest release of GNOME will be the default desktop environment in Fedora 31 Workstation. The Beta release of Fedora 31 is currently expected in the next week or two, with the Final release scheduled for late October.

GNOME 3.34 includes a number of new features and improvements. Congratulations and thank you to the whole GNOME community for the work that went into this release! Read on for more details.

GNOME 3.34 desktop environment at work

Notable features

The desktop itself has been refreshed with a pleasing new background. You can also compare your background images to see what they’ll look like on the desktop.

There’s a new custom application folder feature in the GNOME Shell Overview. It lets you combine applications in a group to make it easier to find the apps you use.

You already know that Boxes lets you easily download an OS and create virtual machines for testing, development, or even daily use. Now you can find sources for your virtual machines more easily, as well as boot from CD or DVD (ISO) images more easily. There is also an Express Install feature available that now supports Windows versions.

Now that you can save states when using GNOME Games, gaming is more fun. You can snapshot your progress without getting in the way of the fun. You can even move snapshots to other devices running GNOME.

More details

These are not the only features of the new and improved GNOME 3.34. For an overview, visit the official release announcement. For even more details, check out the GNOME 3.34 release notes.

The Fedora 31 Workstation Beta release is right around the corner. Fedora 31 will feature GNOME 3.34 and you’ll be able to experience it in the Beta release.

Thursday, 12 September

07:19

Saturday Morning Breakfast Cereal - Void [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I like to think my sacrilege is so stupid that it shouldn't even qualify as offensive.


Today's News:

Wednesday, 11 September

10:00

How Castle is Building Codeless Customer Account Protection [The Cloudflare Blog]


This is a guest post by Johanna Larsson, of Castle, who designed and built the Castle Cloudflare app and the supporting infrastructure.

Strong security should be easy.

Asking your consumers again and again to take responsibility for their security through robust passwords and other security measures doesn’t work. The responsibility of security needs to shift from end users to the companies who serve them.

Castle is leading the way for companies to better protect their online accounts, with millions of consumers being protected every day. Uniquely, Castle extends threat prevention and protection both pre- and post-login, ensuring you can keep friction low but security high. With realtime responses and automated workflows for account recovery, overwhelmed security teams are given a hand. However, when you’re that busy, sometimes deploying new solutions takes more time than you have. Reducing time to deployment was a priority, so Castle turned to Cloudflare Workers.

User security and friction

When security is no longer optional and threats are not black or white, security teams are left with trying to determine how to allow end-user access and transaction completions when there are hints of risk, or when not all of the information is available. Keeping friction low is important to customer experience. Castle helps organizations be more dynamic and proactive by making continuous security decisions based on realtime risk and trust.

Some of the challenges with traditional solutions are that they are often focused just on protecting the app, or only on the point of access (protecting against bot access, for example). Tools specifically designed for securing user accounts, however, are fundamentally focused on protecting the accounts of the end users, whether they are being targeted by humans or bots. Being able to understand end-user behaviors and their devices, both pre and post login, is therefore critical to truly protecting each user. The key to protecting users is being able to distinguish between normal and anomalous activity on an individual account and device basis. You also need a playbook to respond to anomalies and attacks with dedicated flows that allow your end users to interact directly and provide feedback around security events.

By understanding the end user and their good behaviors, devices, and transactions, it is possible to automatically respond to account threats in real-time based on risk level and policy. This approach not only reduces end-user friction but enables security teams to feel more confident that they won't ever be blocking a legitimate login or transaction.

Castle processes tens of millions of events every day through its APIs, including contextual information like headers, IP, and device types. The more information that can be associated with a request the better. This allows us to better recognize abnormalities and protect the end user. Collection of this information is done in two ways. One is done on the web application's backend side through our SDKs and the other is done on the client side using our mobile SDK or browser script. Our experience shows that any integration of a security service based on user behavior and anomaly detection can involve many different parties across an organization, and it affects multiple layers of the tech stack. On top of the security related roles, it's not unusual to also have to coordinate between backend, devops, and frontend teams. The information related to an end user session is often spread widely over a code base.

The cost of security

One of the biggest challenges in implementing a user-facing security and risk management solution is the variety of people and teams it needs attention from, each with competing priorities. Security teams are often understaffed and overwhelmed, making it difficult to take on new projects. At the same time, it consumes time from product and engineering personnel on the application side, who are responsible for UX flows and performing continuous authentication post-login.

We've been experimenting with approaches where we can extract that complexity from your application code base, while also reducing the effort of integrating. At Castle, we believe that strong security should be easy.


With Cloudflare we found a service that enables us to create a more friendly, simple, and in the end, safe integration process by placing the security layer directly between the end user and your application. Security-related logic shouldn't pollute your app, but should reside in a separate service, or shield, that covers your app. When the two environments are kept separate, this reduces the time and cost of implementing complex systems making integration and maintenance less stressful and much easier.

Our integration with Cloudflare aims to solve this implementation challenge, delivering end-to-end account protection for your users, both pre and post login, with the click of a button.

The codeless integration

In our quest for a purely codeless integration, a few key features are required. Every customer application is different, which means every integration is different. We want to solve this problem for you once and for all. To do this, we needed to move the security work away from the implementation details so that we could instead focus on describing the key interactions with the end user, like logins or bank transactions. We also wanted to empower key decision makers to recognize and handle crucial interactions in their systems. Creating a single solution that could be customized to fit each specific use case was a priority.

Building on top of Cloudflare's platform, we made use of three unique and powerful products: Workers, Apps for Workers, and Workers KV.

Thanks to Workers we have full access to the interactions between the end user and your application. With their impressive performance, we can confidently run inline of website requests without creating noticeable latency. We will never slow down your site. And in order to achieve the flexibility required to match your specific use case, we created an internal configuration format that fully describes the interactions of devices and servers across HTTP, including web and mobile app traffic. It is in this Worker where we've implemented an advanced routing engine to match and collect information about requests and responses to events, directly from the edge. It also fully handles injecting the Castle browser script — one less thing to worry about.

All of this logic is kept separate from your application code, and through the Cloudflare App Store we are able to distribute this Worker, giving you control over when and where it is enabled, as well as what configurations are used. There's no need to copy/paste code or manage your own Workers.

In order to achieve the required speed while running in distributed edge locations, we needed a high performing low latency datastore, and we found one in the Cloudflare Workers KV Store. Cloudflare Apps are not able to access the KV Store directly, but we've solved this by exposing it through a separate Worker that the Castle App connects to. Because traffic between Workers never leaves the Cloudflare network, this is both secure and fast enough to match your requirements. The KV Store allows us to maintain end user sessions across the world, and also gives us a place to store and update the configurations and sessions that drive the Castle App.

In combining these products we have a complete and codeless integration that is fully configurable and that won't slow you down.

How does it work?

The data flow is straightforward. After installing the Castle App, Cloudflare will route your traffic through the Castle App, which uses the Castle Data Store and our API to intelligently protect your end users. The impact to traffic latency is minimal because most work is done in the background, not blocking the requests. Let's dig deeper into each technical feature:

Script injection

One of the tools we use to verify user identity is a browser script: Castle.js. It is responsible for gathering device information and UI interaction behavior, and although it is not required for our service to function, it helps improve our verdicts. This means it's important that it is properly added to every page in your web application. The Castle App, running between the end user and your application, is able to unobtrusively add the script to each page as it is served. In order for the script to also track page interactions, it needs to be able to connect them to your users, which is done through a call to our script and works out of the box with the Cloudflare integration. This removes 100% of the integration work from your frontend teams.

Collect contextual information

The second half of the information that forms the basis of our security analysis is the information related to the request itself, such as IP and headers, as well as timestamps. Gathering this information may seem straightforward, but our experience shows some recurring problems in traditional integrations. IP-addresses are easily lost behind reverse proxies, as they need to be maintained as separate headers, like `X-Forwarded-For`, and the internal format of headers differs from platform to platform. Headers in general might get cut off based on whitelisting. The Castle App sees the original request as it comes in, with no outside influence or platform differences, enabling it to reliably create the context of the request. This saves your infrastructure and backend engineers from huge efforts debugging edge cases.

Advanced routing engine

Finally, in order to reliably recognize important events, like login attempts, we've built a fully configurable routing engine. This is fast enough to run inline of your web application, and supports near real-time configuration updates. It is powerful enough to translate requests into actual events in your system, like logins, purchases, profile updates or transactions. Using information from the request, it is then able to send this information to Castle, where you are able to analyze, verify and take action on suspicious activity. What's even better is that at any point in the future, if you want Castle to protect a new critical user event, such as a withdrawal or transfer, all it takes is adding a record to the configuration file. You never have to touch application code in order to expand your Castle integration across sensitive events.

We've put together an example TypeScript snippet that naively implements the flow and features we've discussed. The details are glossed over so that we can focus on the functionality.

// Register the Worker's fetch handler
addEventListener("fetch", event => event.respondWith(handleEvent(event)));

const handleEvent = async (event: CloudflareEvent) => {
  // You configure the application with your Castle API key
  const { apiKey } = INSTALL_OPTIONS;
  const { request } = event;

  // Configuration is fetched from the KV Store
  const configuration = await getConfiguration(apiKey);

  // The session is also retrieved from the KV Store
  const session = await getUserSession(request);

  // Pass the request through and get the response
  let response = await fetch(request);

  // Using the configuration we can recognize events by running
  // the request+response and configuration through our matching engine
  const securityEvent = getMatchingEvent(request, response, configuration);

  if (securityEvent) {
    // With direct access to the raw request, we can confidently build the context
    // including a device ID generated by the browser script, IP, and headers
    const requestContext = getRequestContext(request);

    // Collecting the relevant information, the data is passed to the Castle API
    event.waitUntil(sendToCastle(securityEvent, session, requestContext));
  }

  // Because we have access to the response HTML page we can safely inject the browser
  // script. If the response is not an HTML page it is passed through untouched.
  response = injectScript(response, session);

  return response;
};

We hope we have inspired you and demonstrated how Workers can provide speed and flexibility when implementing end to end account protection for your end users with Castle. If you are curious about our service, learn more here.

07:34

Saturday Morning Breakfast Cereal - Apology [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Everyone seems to think this is about current events, but I wrote it weeks ago.


Today's News:

Thanks for giving it a look!

07:26

Saturday Morning Breakfast Cereal - Entropy [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
First parent to admonish their kid to not create so much information gets 10 Weinersmith Points.


Today's News:

It's a double update day. Stay tuned.

02:00

How to set up a TFTP server on Fedora [Fedora Magazine]

TFTP, or Trivial File Transfer Protocol, allows users to transfer files between systems using the UDP protocol. By default, it uses UDP port 69. The TFTP protocol is extensively used to support remote booting of diskless devices. So, setting up a TFTP server on your own local network can be an interesting way to do Fedora installations, or other diskless operations.

TFTP can only read and write files to or from a remote system. It doesn’t have the capability to list files or make any changes on the remote server. There are also no provisions for user authentication. Because of security implications and the lack of advanced features, TFTP is generally only used on a local area network (LAN).

TFTP server installation

The first thing you will need to do is install the TFTP client and server packages:

dnf install tftp-server tftp -y

This creates a tftp service and socket file for systemd under /usr/lib/systemd/system.

/usr/lib/systemd/system/tftp.service
/usr/lib/systemd/system/tftp.socket

Next, copy and rename these files to /etc/systemd/system:

cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service

cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket

Making local changes

You need to edit these files from the new location after you’ve copied and renamed them, to add some additional parameters. Here is what the tftp-server.service file initially looks like:

[Unit]
Description=Tftp Server
Requires=tftp.socket
Documentation=man:in.tftpd

[Service]
ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
StandardInput=socket

[Install]
Also=tftp.socket

Make the following changes to the [Unit] section:

Requires=tftp-server.socket

Make the following changes to the ExecStart line:

ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot

Here is what the options mean:

  • The -c option allows new files to be created.
  • The -p option performs no additional permissions checks above the normal system-provided access controls.
  • The -s option is recommended for security as well as compatibility with some boot ROMs which cannot easily be made to include a directory name in their requests.

The default upload/download location for transferring the files is /var/lib/tftpboot.

Next, make the following changes to the [Install] section:

[Install]
WantedBy=multi-user.target
Also=tftp-server.socket

Don’t forget to save your changes!

Here is the completed /etc/systemd/system/tftp-server.service file:

[Unit]
Description=Tftp Server
Requires=tftp-server.socket
Documentation=man:in.tftpd

[Service]
ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
StandardInput=socket

[Install]
WantedBy=multi-user.target
Also=tftp-server.socket

Starting the TFTP server

Reload the systemd daemon:

systemctl daemon-reload

Now start and enable the server:

systemctl enable --now tftp-server
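
You can then verify that systemd is listening for TFTP traffic (a quick sanity check; the ss utility ships with Fedora by default):

systemctl status tftp-server.socket   # should report the socket as active (listening)
ss -ulpn | grep ':69'                 # confirms something is bound to UDP port 69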

To change the permissions of the TFTP server to allow upload and download functionality, use this command. Note that TFTP is an inherently insecure protocol, so this may not be advised on a network you share with other people.

chmod 777 /var/lib/tftpboot

Configure your firewall to allow TFTP traffic:

firewall-cmd --add-service=tftp --perm
firewall-cmd --reload

Client Configuration

Install the TFTP client:

dnf install tftp -y

Run the tftp command to connect to the TFTP server. Here is an example that enables the verbose option:

[client@thinclient:~ ]$ tftp 192.168.1.164
tftp> verbose
Verbose mode on.
tftp> get server.logs
getting from 192.168.1.164:server.logs to server.logs [netascii]
Received 7 bytes in 0.0 seconds [inf bits/sec]
tftp> quit
[client@thinclient:~ ]$ 

Remember, TFTP does not have the ability to list file names. So you’ll need to know the file name before running the get command to download any files.
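
Uploading works the same way with the put command. Here is an illustrative session (the file name is hypothetical, and the upload only succeeds because the server runs with the -c option and the directory was made writable):

[client@thinclient:~ ]$ tftp 192.168.1.164
tftp> put mylocal.file
Sent 1024 bytes in 0.0 seconds
tftp> quit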


Photo by Laika Notebooks on Unsplash.

Tuesday, 10 September

09:50

Saturday Morning Breakfast Cereal - Wax [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Now, I need you to run a hair dryer over for me for seventeen hours.


Today's News:

Monday, 09 September

08:49

Saturday Morning Breakfast Cereal - Pocket [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If you grow cotton, can you say 'I'm in the farm of big pocket' ?


Today's News:

02:00

Firefox 69 available in Fedora [Fedora Magazine]

When you install the Fedora Workstation, you’ll find the world-renowned Firefox browser included. The Mozilla Foundation underwrites work on Firefox, as well as other projects that promote an open, safe, and privacy respecting Internet. Firefox already features a fast browsing engine and numerous privacy features.

A community of developers continues to improve and enhance Firefox. The latest version, Firefox 69, was released recently and you can get it for your stable Fedora system (30 and later). Read on for more details.

New features in Firefox 69

The newest version of Firefox includes Enhanced Tracking Protection (or ETP). When you use Firefox 69 with a new (or reset) settings profile, the browser makes it harder for sites to track your information or misuse your computer resources.

For instance, less scrupulous websites use scripts that cause your system to do lots of intense calculations to produce cryptocurrency, a practice called cryptomining. Cryptomining happens without your knowledge or permission and is therefore a misuse of your system. The new standard setting in Firefox 69 prevents sites from this kind of abuse.

Firefox 69 has additional settings to prevent sites from identifying or fingerprinting your browser for later use. These improvements give you additional protection from having your activities tracked online.

Another common annoyance is videos that start in your browser without warning. Video playback also uses extra CPU power and you may not want this happening on your laptop without permission. Firefox already stops this from happening using the Block Autoplay feature. But Firefox 69 also lets you stop videos from playing even if they start without sound. This feature prevents unwanted sudden noise. It also solves more of the real problem — having your computer’s power used without permission.

There are numerous other new features in the new release. Read more about them in the Firefox release notes.

How to get the update

Firefox 69 is available in the stable Fedora 30 and pre-release Fedora 31 repositories, as well as Rawhide. The update is provided by Fedora’s maintainers of the Firefox package. The maintainers also ensured an update to Mozilla’s Network Security Services (the nss package). We appreciate the hard work of the Mozilla project and Firefox community in providing this new release.

If you’re using Fedora 30 or later, use the Software tool on Fedora Workstation, or run the following command on any Fedora system:

$ sudo dnf --refresh upgrade firefox

If you’re on Fedora 29, help test the update for that release so it can become stable and easily available for all users.

Firefox may prompt you to upgrade your profile to use the new settings. To take advantage of new features, you should do this.

Sunday, 08 September

18:00

Breaking Down Technical Interviews [Yelp Engineering and Product Blog]

Finding that first job in the tech industry can be a daunting task. You might not get a response to your application, or maybe you’ll move forward with the interview, but it doesn’t pan out in the end. You might wonder, “How are other people successful at getting offers? What do others do differently? What’s the secret to getting through this arduous process?” The answer is pretty simple: lots of practice. While there’s never a guarantee of getting an offer, following these recommendations can increase your chances of successfully going through the interview process and potentially landing your dream job!...

10:48

Saturday Morning Breakfast Cereal - p [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I don't like this new world where you have to use complicated stats to get the answer that'll improve your job prospects.


Today's News:

Saturday, 07 September

08:39

Saturday Morning Breakfast Cereal - Summertime [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I've only recently moved to the tick part of the universe, and basically AAAAAAAAAAAAAAAAAAAAAA


Today's News:

Friday, 06 September

06:59

Saturday Morning Breakfast Cereal - God Computer [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Look, if you wanted this to work you shouldn't have evolved apes in the first place, okay?


Today's News:

02:00

Performing storage management tasks in Cockpit [Fedora Magazine]

In the previous article we touched upon some of the new features introduced to Cockpit over the years. This article will look into some of the tools within the UI to perform everyday storage management tasks. To access these functionalities, install the cockpit-storaged package:

 sudo dnf install cockpit-storaged

From the main screen, click the Storage menu option in the left column. Everything needed to observe and manage disks is available on the main Storage screen. The top of the page displays two graphs for the disk’s read and write performance, with the local filesystem’s information below. There are also options to add or modify RAID devices, volume groups, iSCSI devices, and drives. Finally, scrolling down reveals a summary of recent logs, so admins can catch any errors that require immediate attention.

Cockpit storage main screen

Filesystems

This section lists the system’s mounted partitions. Clicking on a partition displays information and options for that mounted drive. Options to grow and shrink the partition are available in the Volume sub-section, and a Filesystem sub-section allows you to change the label and configure the mount.

If it’s part of a volume group, other logical volumes in that group will also be available. Each standard partition has options to delete and format it, and logical volumes have an added option to deactivate the partition.

Example screenshot of the /boot filesystem in Cockpit

RAID devices

Cockpit makes it super-easy to manage RAID drives. With a few simple clicks the RAID drive is created, formatted, encrypted, and mounted. For details, or a how-to on creating a RAID device from the CLI, check out the article Managing RAID arrays with mdadm.

To create a RAID device, start by clicking the add (+) button. Enter a name, select the type of RAID level and the available drives, then click Create. The RAID section will show the newly created device. Select it to create the partition table and format the drive(s). You can always remove the device by clicking the Stop and Delete buttons in the top-right corner.
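 
For comparison, here’s a minimal sketch of the same workflow from the CLI with mdadm, assuming /dev/sdb and /dev/sdc are spare disks (the device names are illustrative):

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
$ sudo mkfs.ext4 /dev/md0
$ sudo mount /dev/md0 /mnt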

Creating a RAID device in Cockpit

Logical volumes

By default, the Fedora installation uses LVM when creating the partition scheme. This allows users to create groups, and add volumes from different disks to those groups. The article Use LVM to Upgrade Fedora has some great tips and explanations of how it works on the command line.

Start by clicking the add (+) button next to “Volume Groups”. Give the group a name, select the disk(s) for the volume group, and click Create. The new group is available in the Volume Groups section. The example below demonstrates a new group named “vgraiddemo”.

Now, click the newly made group then select the option to Create a New Logical Volume. Give the LV a name and select the purpose: Block device for filesystems, or pool for thinly provisioning volumes. Adjust the amount of storage, if necessary, and click the Format button to finalize the creation.

Creating a volume group and assigning disks to that volume group.

Cockpit can also configure existing volume groups. To add a drive to an existing group, click the name of the volume group, then click the add (+) button next to “Physical Volumes”. Select the disk from the list and click the Add button. In one shot, a new PV is not only created but also added to the group. From here, we can add the available storage to a partition, or create a new LV. The example below demonstrates how the additional space is used to grow the root filesystem.
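 
The rough CLI equivalent of these steps, assuming /dev/sdb and /dev/sdc are free disks and reusing the group name from the example:

$ sudo pvcreate /dev/sdb /dev/sdc
$ sudo vgcreate vgraiddemo /dev/sdb
$ sudo lvcreate -n mylv -L 10G vgraiddemo
$ sudo vgextend vgraiddemo /dev/sdc
$ sudo lvextend -r -L +10G /dev/vgraiddemo/mylv

The -r flag tells lvextend to also grow the filesystem on the volume.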

iSCSI targets

Connecting to an iSCSI server is a quick process and requires two things: the initiator name, which is assigned to the client, and the name or IP address of the server, known as the target. The initiator name on the system may therefore need to be changed to match the configuration on the target server.

To change the initiator’s name, click the button with the pencil icon, enter the name, and click Change.

To add the iSCSI target, click the add (+) button, enter the server’s address and, if required, the username and password, then click Next. Select the target, verify the name, address, and port, and click Add to finalize the process.

To remove a target, click the “checkmark” button. A red trashcan will appear beside the target(s). Click it to remove the target from the setup list.
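 
On the command line, the discovery and login steps are handled by iscsiadm. A sketch, with a made-up portal address and target IQN:

$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
$ sudo iscsiadm -m node -T iqn.2019-09.com.example:target1 -p 192.168.1.50 --login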

Adding an iSCSI target in Cockpit

NFS mount

Cockpit even allows sysadmins to configure NFS shares within the UI. To add NFS shares, click the add (+) button in the NFS mounts section. Enter the server’s address, the path of the share on the server, and a location on the local machine to mount the share. Adjust the mount options if needed and click Add to view information about the share. We also have the options to unmount, edit, and remove the share. The example below demonstrates how the NFS share on SERVER02 is mounted to the /mnt directory.
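 
The manual equivalent is a single mount command; the export path below is illustrative:

$ sudo mount -t nfs SERVER02:/exports/share /mnt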

Conclusion

As we’ve seen in this article, many storage-related tasks that normally require lengthy, multi-line commands can be done within the web UI with just a few clicks. Cockpit is continuously evolving, and every new feature makes the project better and better. In the next article we’ll explore the features and components on the networking side of things.

Thursday, 05 September

09:23

Saturday Morning Breakfast Cereal - VRRRR [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I think there could be a whole field of study based on analyzing what pets are afraid of and assuming it exists in the fossil record.


Today's News:

08:24

Fast WordPress Sites with Bluehost & Cloudflare Workers [The Cloudflare Blog]

WordPress is the most popular CMS (content management system) in the world, powering over a third of the top 10 million websites, according to W3Techs.

WordPress is an open source software project that many website service providers host for end customers, enabling them to build WordPress sites and serve that content to visitors over the Internet. For hosting providers, one of the opportunities and challenges is to host one version of WordPress on their infrastructure that performs well for all their customers, without modifying the WordPress code on a per-customer basis.

Hosting providers are increasingly turning to Cloudflare’s Serverless Workers Platform to deliver high performance to their end customers by fixing performance issues at the edge while avoiding modifying code on an individual site basis.

One innovative WordPress hosting provider that Cloudflare has been working with to do this is Bluehost, a web host recommended by WordPress.org. In collaboration with Bluehost, Cloudflare Workers have achieved a 40% performance improvement for sites running Workers. Bluehost started with Cloudflare Workers code for Fast Google Fonts, which in-lines the browser-specific font CSS and re-hosts the font files through the page origin. This removes the multiple calls to load the CSS and font files from Google, and improves WordPress site response time. Bluehost then went further and added performance enhancements that rehost commonly run third-party scripts and cache dynamic HTML on the edge, in conjunction with Bluehost’s own plugin infrastructure.

Bluehost will offer Cloudflare Workers in early 2020. Once implemented, customers will see faster response times, which could result in more website visitors sticking with the site while it renders. Additional benefits could include improved ad dollars from a higher number of impressions and ecommerce revenue from more shoppers.

“We were so impressed to see a 40% performance improvement for websites leveraging Workers, and can’t wait to offer this to our customers in 2020. Our team is excited to partner with Cloudflare and continue to innovate with Workers for added benefits for our customers,” said Suhaib Zaheer, General Manager for Bluehost.

Stay tuned for more performance improvements with Cloudflare Workers!

Wednesday, 04 September

08:01

Saturday Morning Breakfast Cereal - Juliet [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I really should've invented an herbal heart 'medication' to bundle with this comic.


Today's News:

03:45

How to build Fedora container images [Fedora Magazine]

With the rise of containers and container technology, all major Linux distributions nowadays provide a container base image. This article presents how the Fedora project builds its base image. It also shows you how to use it to create a layered image.

Base and layered images

Before we look at how the Fedora container base image is built, let’s define a base image and a layered image. A simple way to define a base image is an image that has no parent layer. But what does that concretely mean? It means a base image usually contains only the root file system (rootfs) of an operating system. The base image generally provides the tools needed to install software in order to create layered images.

A layered image adds a collection of layers on top of the base image in order to install, configure, and run an application. Layered images reference base images in a Dockerfile using the FROM instruction:

FROM fedora:latest

How to build a base image

Fedora has a full suite of tools available to build container images. This includes podman, which does not require running as the root user.

Building a rootfs

A base image consists mainly of a tarball containing a rootfs. There are different ways to build this rootfs. The Fedora project uses the kickstart installation method coupled with the imagefactory software to create these tarballs.

The kickstart file used during the creation of the Fedora base image is available in Fedora’s build system Koji. The Fedora-Container-Base package regroups all the base image builds. If you select a build, it gives you access to all the related artifacts, including the kickstart files. Looking at an example, the %packages section at the end of the file defines all the packages to install. This is how you make software available in the base image.

Using a rootfs to build a base image

Building a base image is easy, once a rootfs is available. It requires only a Dockerfile with the following instructions:

FROM scratch
ADD layer.tar /
CMD ["/bin/bash"]

The important part here is the FROM scratch instruction, which creates an empty image. The ADD instruction then adds the rootfs to the image, and CMD sets the default command to be executed when the image is run.

Let’s build a base image using a Fedora rootfs built in Koji:

$ curl -o fedora-rootfs.tar.xz https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz
$ tar -xJvf fedora-rootfs.tar.xz 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar 
$ mv 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar layer.tar
$ printf "FROM scratch\nADD layer.tar /\nCMD [\"/bin/bash\"]" > Dockerfile
$ podman build -t my-fedora .
$ podman run -it --rm my-fedora cat /etc/os-release

The layer.tar file, which contains the rootfs, needs to be extracted from the downloaded archive. This is only needed because Fedora generates images that are ready to be consumed by a container run-time.

So using Fedora’s generated image, it’s even easier to get a base image. Let’s see how that works:

$ curl -O https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz
$ podman load --input Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz
$ podman run -it --rm localhost/fedora-container-base-rawhide-20190902.n.0.x86_64:latest cat /etc/os-release

Building a layered image

To build a layered image that uses the Fedora base image, you only need to specify fedora in the FROM instruction:

FROM fedora:latest

The latest tag references the latest active Fedora release (Fedora 30 at the time of writing). But it is possible to get other versions using the image tag. For example, FROM fedora:31 will use the Fedora 31 base image.
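 
As a small illustration, here is a layered image that installs a web server on top of the Fedora base image (the choice of httpd is arbitrary):

FROM fedora:latest
RUN dnf -y install httpd && dnf clean all
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]

Build and run it with podman:

$ podman build -t my-httpd .
$ podman run -d -p 8080:80 my-httpd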

Fedora supports building and releasing software as containers. This means you can maintain a Dockerfile to make your software available to others. For more information about becoming a container image maintainer in Fedora, check out the Fedora Containers Guidelines.

Tuesday, 03 September

07:10

Saturday Morning Breakfast Cereal - Wishes [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I wonder how many genie wishes don't get used because it'd be too awkward to request them.


Today's News:

Hey dorks! The new xkcd is out. Check it out!

Monday, 02 September

18:00

Throw [xkcd.com]

This comic best viewed on xkcd.com

07:55

Saturday Morning Breakfast Cereal - Great Expectations [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I didn't want to plot two curves, but 'actual staying power' is a function of time in minutes, given by f(t)=2


Today's News:

Lovely reviews have been coming in for the new book, but it's always nice to get one from Tim Harford.

02:00

How RPM packages are made: the spec file [Fedora Magazine]

In the previous article on RPM package building, you saw that source RPMS include the source code of the software, along with a “spec” file. This post digs into the spec file, which contains instructions on how to build the RPM. Again, this article uses fpaste as an example.

Understanding the source code

Before you can start writing a spec file, you need to have some idea of the software that you’re looking to package. Here, you’re looking at fpaste, a very simple piece of software. It is written in Python, and is a one-file script. When a new version is released, it’s provided here on Pagure: https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz

The current version, as the archive shows, is 0.3.9.2. Download it so you can see what’s in the archive:

$ wget https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
$ tar -tvf fpaste-0.3.9.2.tar.gz
drwxrwxr-x root/root         0 2018-07-25 02:58 fpaste-0.3.9.2/
-rw-rw-r-- root/root        25 2018-07-25 02:58 fpaste-0.3.9.2/.gitignore
-rw-rw-r-- root/root      3672 2018-07-25 02:58 fpaste-0.3.9.2/CHANGELOG
-rw-rw-r-- root/root     35147 2018-07-25 02:58 fpaste-0.3.9.2/COPYING
-rw-rw-r-- root/root       444 2018-07-25 02:58 fpaste-0.3.9.2/Makefile
-rw-rw-r-- root/root      1656 2018-07-25 02:58 fpaste-0.3.9.2/README.rst
-rw-rw-r-- root/root       658 2018-07-25 02:58 fpaste-0.3.9.2/TODO
drwxrwxr-x root/root         0 2018-07-25 02:58 fpaste-0.3.9.2/docs/
drwxrwxr-x root/root         0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/
drwxrwxr-x root/root         0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/
-rw-rw-r-- root/root      3867 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/fpaste.1
-rwxrwxr-x root/root     24884 2018-07-25 02:58 fpaste-0.3.9.2/fpaste
lrwxrwxrwx root/root         0 2018-07-25 02:58 fpaste-0.3.9.2/fpaste.py -> fpaste

The files you want to install are:

  • fpaste.py: which should be installed to /usr/bin/.
  • docs/man/en/fpaste.1: the manual, which should go to /usr/share/man/man1/.
  • COPYING: the license text, which should go to /usr/share/licenses/fpaste/.
  • README.rst, TODO: miscellaneous documentation that goes to /usr/share/doc/fpaste.

Where these files are installed depends on the Filesystem Hierarchy Standard. To learn more about it, you can either read here: http://www.pathname.com/fhs/ or look at the man page on your Fedora system:

$ man hier

Part 1: What are we building?

Now that we know what files we have in the source, and where they are to go, let’s look at the spec file. You can see the full file here: https://src.fedoraproject.org/rpms/fpaste/blob/master/f/fpaste.spec

Here is the first part of the spec file:

Name:   fpaste
Version:  0.3.9.2
Release:  3%{?dist}
Summary:  A simple tool for pasting info onto sticky notes instances
BuildArch:  noarch
License:  GPLv3+
URL:    https://pagure.io/fpaste
Source0:  https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz

Requires:    python3

%description
It is often useful to be able to easily paste text to the Fedora
Pastebin at http://paste.fedoraproject.org and this simple script
will do that and return the resulting URL so that people may
examine the output. This can hopefully help folks who are for
some reason stuck without X, working remotely, or any other
reason they may be unable to paste something into the pastebin

Name, Version, and so on are called tags, and are defined in RPM. This means you can’t just make up tags. RPM won’t understand them if you do! The tags to keep an eye out for are:

  • Source0: tells RPM where the source archive for this software is located.
  • Requires: lists run-time dependencies for the software. RPM can automatically detect quite a few of these, but in some cases they must be mentioned manually. A run-time dependency is a capability (often a package) that must be on the system for this package to function. This is how dnf detects whether it needs to pull in other packages when you install this package.
  • BuildRequires: lists the build-time dependencies for this software. These must generally be determined manually and added to the spec file.
  • BuildArch: the computer architectures that this software is being built for. If this tag is left out, the software will be built for all supported architectures. The value noarch means the software is architecture independent (like fpaste, which is written purely in Python).

This section provides general information about fpaste: what it is, which version is being made into an RPM, its license, and so on. If you have fpaste installed, and look at its metadata, you can see this information included in the RPM:

$ sudo dnf install fpaste
$ rpm -qi fpaste
Name        : fpaste
Version     : 0.3.9.2
Release     : 2.fc30
...

RPM adds a few extra tags automatically that represent things that it knows.

At this point, we have the general information about the software that we’re building an RPM for. Next, we start telling RPM what to do.

Part 2: Preparing for the build

The next part of the spec is the preparation section, denoted by %prep:

%prep
%autosetup

For fpaste, the only command here is %autosetup. This simply extracts the tar archive into a new folder and keeps it ready for the next section where we build it. You can do more here, like apply patches, modify files for different purposes, and so on. If you did look at the contents of the source rpm for Python, you would have seen lots of patches there. These are all applied in this section.

Typically anything in a spec file with the % prefix is a macro or label that RPM interprets in a special way. Often these will appear with curly braces, such as %{example}.
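 
You can see what a macro expands to with rpm --eval. For example, on a Fedora system:

$ rpm --eval "%{_bindir}"
/usr/bin
$ rpm --eval "%{_mandir}"
/usr/share/man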

Part 3: Building the software

The next section is where the software is built, denoted by “%build”. Now, since fpaste is a simple, pure Python script, it doesn’t need to be built. So, here we get:

%build
#nothing required

Generally, though, you’d have build commands here, like:

configure; make

The build section is often the hardest section of the spec, because this is where the software is being built from source. This requires you to know what build system the tool is using, which could be one of many: Autotools, CMake, Meson, Setuptools (for Python) and so on. Each has its own commands and style. You need to know these well enough to get the software to build correctly.
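 
As an illustration, the %build section for a typical Autotools-based project often looks like this, using RPM’s standard helper macros:

%build
%configure
%make_build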

Part 4: Installing the files

Once the software is built, it needs to be installed in the %install section:

%install
mkdir -p %{buildroot}%{_bindir}
make install BINDIR=%{buildroot}%{_bindir} MANDIR=%{buildroot}%{_mandir}

RPM doesn’t tinker with your system files when building RPMs. It’s far too risky to add, remove, or modify files to a working installation. What if something breaks? So, instead RPM creates an artificial file system and works there. This is referred to as the buildroot. So, here in the buildroot, we create /usr/bin, represented by the macro %{_bindir}, and then install the files to it using the provided Makefile.

At this point, we have a built version of fpaste installed in our artificial buildroot.

Part 5: Listing all files to be included in the RPM

The last section of the spec file is the files section, %files. This is where we tell RPM what files to include in the archive it creates from this spec file. The fpaste file section is quite simple:

%files
%{_bindir}/%{name}
%doc README.rst TODO
%{_mandir}/man1/%{name}.1.gz
%license COPYING

Notice how, here, we do not specify the buildroot. All of these paths are relative to it. The %doc and %license commands simply do a little more—they create the required folders and remember that these files must go there.

RPM is quite smart. If you’ve installed files in the %install section but haven’t listed them in %files, it will tell you, as shown in the example below.
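 
For instance, installing a file into the buildroot without listing it causes rpmbuild to fail with an error along these lines (the file name here is hypothetical):

error: Installed (but unpackaged) file(s) found:
   /usr/share/doc/fpaste/EXTRA-FILE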

Part 6: Document all changes in the change log

Fedora is a community based project. Lots of contributors maintain and co-maintain packages. So it is imperative that there’s no confusion about what changes have been made to a package. To ensure this, the spec file contains the last section, the Changelog, %changelog:

%changelog
* Thu Jul 25 2019 Fedora Release Engineering < ...> - 0.3.9.2-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild
 
* Thu Jan 31 2019 Fedora Release Engineering < ...> - 0.3.9.2-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild
 
* Tue Jul 24 2018 Ankur Sinha  - 0.3.9.2-1
- Update to 0.3.9.2
 
* Fri Jul 13 2018 Fedora Release Engineering < ...> - 0.3.9.1-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_29_Mass_Rebuild
 
* Wed Feb 07 2018 Fedora Release Engineering < ..> - 0.3.9.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild
 
* Sun Sep 10 2017 Vasiliy N. Glazov < ...> - 0.3.9.1-2
- Cleanup spec
 
* Fri Sep 08 2017 Ankur Sinha  - 0.3.9.1-1
- Update to latest release
- fixes rhbz 1489605
...
....

There must be a changelog entry for every change to the spec file. As you see here, while I’ve updated the spec as the maintainer, others have too. Having the changes documented clearly helps everyone know what the current status of the spec is. For all packages installed on your system, you can use rpm to see their changelogs:

$ rpm -q --changelog fpaste

Building the RPM

Now we are ready to build the RPM. If you want to follow along and run the commands below, please ensure that you followed the steps in the previous post to set your system up for building RPMs.

We place the fpaste spec file in ~/rpmbuild/SPECS and the source code archive in ~/rpmbuild/SOURCES/, and can now create the source RPM:

$ cd ~/rpmbuild/SPECS
$ wget https://src.fedoraproject.org/rpms/fpaste/raw/master/f/fpaste.spec

$ cd ~/rpmbuild/SOURCES
$ wget https://pagure.io/fpaste/archive/0.3.9.2/fpaste-0.3.9.2.tar.gz

$ cd ~/rpmbuild/SPECS
$ rpmbuild -bs fpaste.spec
Wrote: /home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm

Let’s have a look at the results:

$ ls ~/rpmbuild/SRPMS/fpaste*
/home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm

$ rpm -qpl ~/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec

There we are — the source rpm has been built. Let’s build both the source and binary rpm together:

$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba fpaste.spec
..
..
..

RPM will show you the complete build output, with details on what it is doing in each section that we saw before. This “build log” is extremely important. When builds do not go as expected, we packagers spend lots of time going through them, tracing the complete build path to see what went wrong.

That’s it really! Your ready-to-install RPMs are where they should be:

$ ls ~/rpmbuild/RPMS/noarch/
fpaste-0.3.9.2-3.fc30.noarch.rpm
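 
To test the result, you can install the freshly built package straight from that path; dnf resolves its dependencies just as it would for a repository package:

$ sudo dnf install ~/rpmbuild/RPMS/noarch/fpaste-0.3.9.2-3.fc30.noarch.rpm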

Recap

We’ve covered the basics of how RPMs are built from a spec file. This is by no means an exhaustive document. In fact, it isn’t documentation at all, really. It only tries to explain how things work under the hood. Here’s a short recap:

  • RPMs are of two types: source and binary.
  • Binary RPMs contain the files to be installed to use the software.
  • Source RPMs contain the information needed to build the binary RPMs: the complete source code, and the instructions on how to build the RPM in the spec file.
  • The spec file has various sections, each with its own purpose.

Here, we’ve built RPMs locally, on our Fedora installations. While this is the basic process, the RPMs we get from repositories are built on dedicated servers with strict configurations and methods to ensure correctness and security. This Fedora packaging pipeline will be discussed in a future post.

Would you like to get started with building packages, and help the Fedora community maintain the massive amount of software we provide? You can start here by joining the package collection maintainers.

For any queries, post to the Fedora developers mailing list—we’re always happy to help!

Sunday, 01 September

18:00

New book: How To [xkcd.com]

Hey there!

I'm excited to announce that my new book, How To, will be going on sale in a few hours!



I'm really proud of this book. It features information on everything from opening water bottles with nuclear weapons to how to be on time for meetings by altering the rotation of the Earth. It also includes real-life tips and advice from a number of experts who generously lent their time, including Col. Chris Hadfield and Serena Williams.

You can order it now on Amazon, Barnes & Noble, IndieBound, and Apple Books.

08:38

Saturday Morning Breakfast Cereal - Digital Arts [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I keep trying to think of a joke, but I just can't improve on the profundity of the red button comic.


Today's News:

Saturday, 31 August

06:15

Saturday Morning Breakfast Cereal - Thor [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Fun Fact: Thor was actually a trucker from the 1970s.


Today's News: